Re: if vs. when vs. and: style question

From: Rusi
Subject: Re: if vs. when vs. and: style question
Date: Sun, 29 Mar 2015 18:55:41 -0700 (PDT)
User-agent: G2/1.0

On Sunday, March 29, 2015 at 7:36:17 PM UTC+5:30, Óscar Fuentes wrote:
> Rusi  writes:
> [snip]
> > And even Elisp!
> >
> > *** Welcome to IELM ***  Type (describe-mode) for help.
> > ELISP> (setq α 1 β 2 γ 3)
> > 3 (#o3, #x3, ?\C-c)
> > ELISP> (list α β γ)
> > (1 2 3)
> Some months ago I experimented with using Unicode on my coding. I was
> very excited about it. At the end, the experience showed without a doubt
> that it is a bad idea. One of the reasons is very familiar to us: a
> fundamental feature of a programmer's font is how clearly it
> distinguishes 1 from l, 0 from O. Using Unicode makes this problem
> explode.

Thank you, Óscar, for a (rather rare) reasonable argument.

[Compared to yours, much of the rest I see here is along the lines of:
"Since my keyboard is broken, kindly allow me to break yours!"]

As I pointed out earlier, what you point out is true, and with Unicode the
confusable-glyph problem becomes considerably worse.

The point is that this choice has already been made: many languages are already
*carelessly* accepting Unicode.

Some are a little more laissez-faire than others:
1. Two spellings of "flag" that NFKC-normalize to the same string (e.g. one
using the ﬂ ligature) are the same identifier in Python, but different
identifiers in Haskell and Elisp.  IMHO Python has made the saner choice.

2. Haskell and Elisp accept x₁; Python doesn't. I think Python is wrong here.

3. Haskell allows → for -> and ← for <- (both very heavily used in Haskell).
However, it does not allow λ, the very symbol that defines its identity and is
its logo, because λ is in the letter category.
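Points 1 and 2 can be checked directly in Python (PEP 3131 specifies NFKC
normalization of identifiers). A minimal sketch, assuming the invisible pair in
point 1 was the ﬂ-ligature spelling — the usual example:

```python
import unicodedata

# PEP 3131: Python NFKC-normalizes identifiers at parse time, so
# "ﬂag" (with the U+FB02 ligature) and plain "flag" are one name.
assert unicodedata.normalize("NFKC", "\ufb02ag") == "flag"

ns = {}
exec("\ufb02ag = 42", ns)   # assign via the ligature spelling...
assert ns["flag"] == 42     # ...and read it back as plain "flag"

# Subscript digits such as ₁ (category No) are not identifier
# characters in Python, so x₁ is rejected by the parser:
assert not "x\u2081".isidentifier()   # x₁
assert "x1".isidentifier()
```

Haskell and Elisp apply no such normalization, so there the two spellings stay
distinct symbols.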

> > ELISP> 
> >
> > How much more costly was that α to type than alpha?? One backslash!!
> >
> > Add to that the fact that programs are read
> > - 10 times more than written during development
> > - 100 times more during maintenance
> Precisely, my experience is that Unicode makes things much harder to
> read, and not only because of the problem mentioned above.

You are raising a point about a certain piece of software/hardware that we all
use and that no one understands – our brains. Consider:

APL is generally regarded as unreadable. [That it is often derisively called
'write-only' rather than 'unwriteable' is a separate discussion.]

But neither is Cobol regarded as readable. And worst of all is machine language.

If an arbitrarily restricted charset were a good fit for our brains, Cobol, which
expressly tries to mirror layman prose (i.e. stay within [A-Za-z0-9]), would have
worked better. And while our machines seem mighty pleased with building the
universe from {0,1}, our brains (OK, at least mine) suffer when reading raw binary.

So where on the spectrum between APL/conventional laissez-faire math and
Cobol/Machine-code is the optimum?

I believe this is an open question.
