Discussion:
Engineering degree for embedded systems
hogwarts
2017-07-27 12:35:14 UTC
Permalink
I am applying for university right now and I am wondering which
engineering degree is better for working on embedded systems and IoT:
"Computer engineering" vs "electronics and communication engineering". Also,
a specific university offers "computer and communication engineering". I
know that having any of those I can get into IoT, but which would be better
for the field?


--------------------------------------
Posted through http://www.EmbeddedRelated.com
Stef
2017-07-27 13:25:09 UTC
Permalink
I am applying for university right now and I am wondering which
engineering degree is better for working on embedded systems and IoT:
"Computer engineering" vs "electronics and communication engineering". Also,
a specific university offers "computer and communication engineering". I
know that having any of those I can get into IoT, but which would be better
for the field?
As always: that depends.

I don't know the particular programs, so just going by the titles:-\

A lot depends on what you want to do in "embedded systems and IoT". Do you
want to work on the hardware, low-level embedded software, higher-level
embedded software, server backend, front end, ...?

Consider a hypothetical internet-connected thermometer:
Do you want to measure the NTC voltage and convert it to degrees KFC? Or
do you want to write the app on the phone to display the value? Or
something in between? If you want to do it all, I think it's best to start
close to one of the extremes and work your way to the other extreme through
experience or additional education. But YMMV.
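
For the "measure the NTC voltage" end of that spectrum, the arithmetic is
roughly the following (a minimal C++ sketch using the simple beta model;
the divider topology and all component values here are assumptions, not
numbers from any real design):

#include <cmath>

// Convert the voltage across an NTC (low side of a divider) to degrees C.
float ntc_celsius(float v_ntc, float v_supply)
{
    const float R_FIXED = 10000.0f;  // series resistor (assumed)
    const float R0      = 10000.0f;  // NTC resistance at 25 degC (assumed)
    const float T0      = 298.15f;   // 25 degC in kelvin
    const float BETA    = 3950.0f;   // beta constant (assumed)

    float r_ntc    = R_FIXED * v_ntc / (v_supply - v_ntc);
    float t_kelvin = 1.0f / (1.0f / T0 + std::log(r_ntc / R0) / BETA);
    return t_kelvin - 273.15f;       // or your favourite flavour of "degrees KFC"
}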

If you want to work on the hardware, or on software more or less closely
related to the hardware, my bet would be on the "electronics ..." degree.
But I'm biased, of course. I have seen, on multiple occasions, that software
engineers with no electronics background have difficulty reading processor
datasheets and electronics schematics, and sometimes fail to really
understand what they mean.

Example:
On a product we had a microcontroller with an internal reference voltage
that was factory calibrated to 2% accuracy. The datasheet also explained
how to use a measurement of this reference to correct ADC data on other
channels. This was implemented in the software. Now, this seems fine, but:
the ADC actually uses an external reference, and the correction only helps
if that external reference is less accurate than the internal one. In our
case the external reference was 0.5%, meaning a measurement with an accuracy
of 0.5% was 'corrected' against a reference with 2% accuracy.
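
The correction itself is only a couple of lines. Something along these
lines (a generic sketch with invented names, not the actual product code):

#include <cstdint>

// Rescale a raw ADC reading using a measurement of the internal reference.
// vrefint_cal is the factory calibration value; vrefint_raw is what the ADC
// reads for the internal reference right now. Only worthwhile if the
// external reference is *less* accurate than the internal one.
static std::uint16_t correct_adc(std::uint16_t raw,
                                 std::uint16_t vrefint_raw,
                                 std::uint16_t vrefint_cal)
{
    return static_cast<std::uint16_t>(
        static_cast<std::uint32_t>(raw) * vrefint_cal / vrefint_raw);
}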

So, enough of that. ;-)

So what do you see yourself doing after your education, and what holds
your personal interest? Check that first, and then compare it to the
offered education. Also pay attention to the balance of theory and practice.
Learning Maxwell's laws does not make you solder better. ;-)
--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

Learning French is trivial: the word for horse is cheval, and everything else
follows in the same way.
-- Alan J. Perlis
Phil Hobbs
2017-07-30 16:05:27 UTC
Permalink
Post by Stef
I am applying for university right now and I am wondering which
engineering degree is better for working on embedded systems and IoT:
"Computer engineering" vs "electronics and communication engineering". Also,
a specific university offers "computer and communication engineering". I
know that having any of those I can get into IoT, but which would be better
for the field?
As always: that depends.
I don't know the particular programs, so just going by the titles:-\
A lot depends on what you want to do in "embedded systems and IoT". Do you
want to work on the hardware, low-level embedded software, higher-level
embedded software, server backend, front end, ...?
Do you want to measure the NTC voltage and convert it to degrees KFC? Or
do you want to write the app on the phone to display the value? Or
something in between? If you want to do it all, I think it's best to start
close to one of the extremes and work your way to the other extreme through
experience or additional education. But YMMV.
If you want to work on the hardware, or on software more or less closely
related to the hardware, my bet would be on the "electronics ..." degree.
But I'm biased, of course. I have seen, on multiple occasions, that software
engineers with no electronics background have difficulty reading processor
datasheets and electronics schematics, and sometimes fail to really
understand what they mean.
On a product we had a microcontroller with an internal reference voltage
that was factory calibrated to 2% accuracy. The datasheet also explained
how to use a measurement of this reference to correct ADC data on other
channels. This was implemented in the software. Now, this seems fine, but:
the ADC actually uses an external reference, and the correction only helps
if that external reference is less accurate than the internal one. In our
case the external reference was 0.5%, meaning a measurement with an accuracy
of 0.5% was 'corrected' against a reference with 2% accuracy.
So, enough of that. ;-)
So what do you see yourself doing after your education, and what holds
your personal interest? Check that first, and then compare it to the
offered education. Also pay attention to the balance of theory and practice.
Learning Maxwell's laws does not make you solder better. ;-)
Another thing is to concentrate the course work on stuff that's hard to
pick up on your own, i.e. math and the more mathematical parts of
engineering (especially signals & systems and electrodynamics).
Programming you can learn out of books without much difficulty, and with
a good math background you can teach yourself anything you need to know
about.

Just learning MCUs and FPGAs is a recipe for becoming obsolete.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Tom Gardner
2017-07-30 18:05:53 UTC
Permalink
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
and with a good math background you can
teach yourself anything you need to know about.
Agreed.
Just learning MCUs and FPGAs is a recipe for becoming obsolete.
There's always a decision to be made as to whether to
be a generalist or a specialist. Both options are
valid, and they have complementary advantages and
disadvantages.
Phil Hobbs
2017-08-01 12:55:20 UTC
Permalink
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I
was a teenager. I picked up good habits from reading books and other
people's code.

Security is another issue. I don't do IoT things myself (and try not to
buy them either), but since that's the OP's interest, I agree that one
should add security/cryptography to the list of subjects to learn about
at school.
Post by Tom Gardner
and with a good math background you can
teach yourself anything you need to know about.
Agreed.
Just learning MCUs and FPGAs is a recipe for becoming obsolete.
There's always a decision to be made as to whether to
be a generalist or a specialist. Both options are
valid, and they have complementary advantages and
disadvantages.
Being a specialist is one thing, but getting wedded to one set of tools
and techniques is a problem.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Tom Gardner
2017-08-01 13:23:26 UTC
Permalink
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.

Nowadays it is normal to find youngsters[1] that have no
inkling beyond the particular language they've been taught,
plus one or two "abstract" problems. Typical statements:
"FSMs? Oh, yes, they are something to do with compilers."
"Caches? Oh yes, they are part of the library"
"L1/2/3 caches? <silence>"
"GCs? They reference count and have long pauses"
"Distributed computing failures? The software framework
deals with those"

[1] i.e. the ones that HR-droids like to hire because
they are cheap and not ornery
Security is another issue. I don't do IoT things myself (and try not to buy
them either), but since that's the OP's interest, I agree that one should add
security/cryptography to the list of subjects to learn about at school.
I like the cryptographers' aphorism "if you think
cryptography will solve your problem, you don't
understand cryptography and you don't understand
your problem."

A quick sanity check is always to investigate how
certificates are revoked when (not if) they are
compromised. That's an Achilles' heel of /all/
biometric systems.
Post by Tom Gardner
and with a good math background you can
teach yourself anything you need to know about.
Agreed.
Just learning MCUs and FPGAs is a recipe for becoming obsolete.
There's always a decision to be made as to whether to
be a generalist or a specialist. Both options are
valid, and they have complementary advantages and
disadvantages.
Being a specialist is one thing, but getting wedded to one set of tools and
techniques is a problem.
Very true. Unfortunately that is encouraged in the s/w
world because the recruiters and HR-droids can't extrapolate
skills from one technology into a (slightly) different
technology.

Sometimes it manifests itself as self-inflicted
cargo-cult engineering. As I taught my daughter...

"Mummy, why do you cut off the end of the leg of lamb
when you roast it?"

"Your granny always did it, and her roasts were delicious.
Ask her"

"Granny, why did you cut off the end of the leg of lamb
when you roasted it?"

"Why did I what? ... Oh yes, it was so the joint would
fit in the small oven".
Phil Hobbs
2017-08-03 15:03:49 UTC
Permalink
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with
two exceptions (gdb and Arduino, I'm looking at you). Plus there are a
whole lot of libraries available (for Python especially) so a determined
beginner can get something cool working (after a fashion) fairly fast.

BITD I did a lot of coding with MS C 6.0 for DOS and OS/2, and before
that, MS Quickbasic and (an old fave) HP Rocky Mountain Basic, which
made graphics and instrument control a breeze. Before that, as an
undergraduate I taught myself FORTRAN-77 while debugging some Danish
astronomer's Monte Carlo simulation code. I never did understand how it
worked in any great depth, but I got through giving a talk on it OK. It
was my first and last Fortran project.

Before that, I did a lot of HP calculator programming (HP25C and HP41C).
I still use a couple of those 41C programs from almost 40 years ago.
There was a hacking club called PPC that produced a hacking ROM for the
41C that I still have, though it doesn't always work anymore.

Seems as though youngsters mostly start with Python and then start in on
either webdev or small SBCs using Arduino / AVR Studio / Raspbian or
(for the more ambitious) something like BeagleBone or (a fave)
LPCxpresso. Most of my embedded work is pretty light-duty, so an M3 or
M4 is good medicine. I'm much better at electro-optics and analog/RF
circuitry than at MCUs or HDL, so I do only enough embedded things to
get the whole instrument working. Fancy embedded stuff I either leave
to the experts, do in hardware, or hive off to an outboard computer via
USB serial, depending on the project.

It's certainly true that things get complicated fast, but they did in
the old days too. Of course the reasons are different: nowadays it's
the sheer complexity of the silicon and the tools, whereas back then it
was burn-and-crash development, flaky in-system emulators, and debuggers
which (if they even existed) were almost as bad as Arduino.

I still have nightmares about the horribly buggy PIC C17 compiler for
the PIC17C452A, circa 1999. I was using it in an interesting very low
cost infrared imager <http://electrooptical.net#footprints>. I had an
ICE, which was a help, but I spent more time finding bug workarounds
than coding.

Eventually when the schedule permitted I ported the code to HiTech C,
which was a vast improvement. Microchip bought HiTech soon thereafter,
and PIC C died a well deserved but belated death.

My son and I are doing a consulting project together--it's an M4-based
concentrator unit for up to 6 UV/visible/near IR/thermal IR sensors for
a fire prevention company. He just got the SPI interrupt code working
down on the metal a couple of minutes ago. It's fun when your family
understands what you do. :)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Tom Gardner
2017-08-06 09:35:03 UTC
Permalink
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.

The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine. I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working. Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.

That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.

Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
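
For flavour, here is a much tamer relative of that prime-number program
(a sketch, C++11 or later); the compiler, not the CPU, does the arithmetic:

// The recursion is evaluated while compiling; by the time the program
// runs, the answer is just a constant.
template <unsigned N>
struct Fact { static const unsigned value = N * Fact<N - 1>::value; };

template <>
struct Fact<0> { static const unsigned value = 1; };

static_assert(Fact<5>::value == 120, "computed entirely at compile time");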

Another example is that C/C++ is routinely used to develop
multi-threaded code, e.g. using Pthreads. That's despite
C/C++ specifically being unable to guarantee correct
operation on modern machines! Most developers are
blissfully unaware of (my *emphasis*):

Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
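
To make that concrete, the classic toy example is something like this
(my sketch, not code from the paper). Nothing in pre-C11/C++11 language
rules tells the compiler that x is shared, so it may keep it in a register
or reorder the accesses - and even without compiler mischief the two
read-modify-write loops race:

#include <pthread.h>
#include <cstdio>

static int x = 0;                        // plain int: no atomics, no memory model

static void *bump(void *)
{
    for (int i = 0; i < 1000000; ++i)
        ++x;                             // unsynchronised read-modify-write
    return nullptr;
}

int main()
{
    pthread_t a, b;
    pthread_create(&a, nullptr, bump, nullptr);
    pthread_create(&b, nullptr, bump, nullptr);
    pthread_join(a, nullptr);
    pthread_join(b, nullptr);
    std::printf("%d\n", x);              // often not 2000000
    return 0;
}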
I still have nightmares about the horribly buggy PIC C17 compiler for the
PIC17C452A, circa 1999. I was using it in an interesting very low cost infrared
imager <http://electrooptical.net#footprints>. I had an ICE, which was a help,
but I spent more time finding bug workarounds than coding.
There are always crap instantiations of tools, but they
can be avoided. I'm more concerned about tools where the
specification prevents good and safe implementations.
Eventually when the schedule permitted I ported the code to HiTech C, which was
a vast improvement. Microchip bought HiTech soon thereafter, and PIC C died a
well deserved but belated death.
My son and I are doing a consulting project together--it's an M4-based
concentrator unit for up to 6 UV/visible/near IR/thermal IR sensors for a fire
prevention company. He just got the SPI interrupt code working down on the
metal a couple of minutes ago. It's fun when your family understands what you
do. :)
Lucky you -- I think! I've never been convinced of the
wisdom of mixing work and home life, and family businesses
seem to be the source material for reality television :)
John Devereux
2017-08-06 13:40:21 UTC
Permalink
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine. I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working. Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "Turing complete", or at least
close to it, e.g.:

https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete


Or a C prime number generator that mostly uses the preprocessor

https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
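
A tiny flavour of how those constructions work - branching done purely by
token pasting (just the basic building block, not the whole encoding):

// IF(1)(a, b) expands to a; IF(0)(a, b) expands to b - all before the
// compiler proper ever sees the code.
#define CAT(a, b) a##b
#define IF(c) CAT(IF_, c)
#define IF_1(t, f) t
#define IF_0(t, f) f

int pick = IF(1)(42, 0);   // the preprocessor selects 42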

At any rate "Compile-time processing" is a big thing now in modern C++,
see e.g.

Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U


Or, more topically for embedded systems, there are things like Kvasir,
which do a lot of compile-time work to ~perfectly optimise register
accesses and hardware initialisation

https://github.com/kvasir-io/Kvasir
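
Not Kvasir's actual API - just the general flavour of the idea: when
register addresses, masks and values are template constants, the whole
"initialisation table" collapses at compile time into a handful of
constant stores (the addresses and fields below are invented):

#include <cstdint>

template <std::uint32_t Addr, std::uint32_t Mask, std::uint32_t Value>
inline void write_field()
{
    auto reg = reinterpret_cast<volatile std::uint32_t *>(Addr);
    *reg = (*reg & ~Mask) | Value;
}

inline void uart_init()
{
    write_field<0x40001000u, 0x00000001u, 0x00000001u>();  // CTRL.EN  = 1
    write_field<0x40001004u, 0x000000FFu, 0x00000068u>();  // BAUD.DIV = 104
}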

[...]
--
John Devereux
rickman
2017-08-06 16:51:59 UTC
Permalink
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine. I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working. Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for
decades. Why is this important in other languages now?
--
Rick C
Tom Gardner
2017-08-06 19:13:08 UTC
Permalink
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's hard
to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine. I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working. Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.

What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.

That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
rickman
2017-08-06 23:21:16 UTC
Permalink
Post by Tom Gardner
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's hard
to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working. Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
That's the point. Forth is one of the simplest development tools you will
ever find. It also has some of the least constraints. The only people who
think it is a bad idea are those who think RPN is a problem and object to
other trivial issues.
--
Rick C
Phil Hobbs
2017-08-07 16:40:15 UTC
Permalink
Post by rickman
Post by Tom Gardner
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's hard
to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it
since
I was a
teenager. I picked up good habits from reading books and other
people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are
a whole
lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for
the
more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working.
Fancy
embedded
stuff I either leave to the experts, do in hardware, or hive off
to an
outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
That's the point. Forth is one of the simplest development tools you
will ever find. It also has some of the least constraints. The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues.
I used to program in RPN routinely, still use RPN calculators
exclusively, and don't like Forth. Worrying about the state of the
stack is something I much prefer to let the compiler deal with. It's
like C functions with ten positional parameters.

Cheers

Phil "existence proof" Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-07 19:18:40 UTC
Permalink
Post by Phil Hobbs
Post by rickman
Post by Tom Gardner
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's hard
to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it
since
I was a
teenager. I picked up good habits from reading books and other
people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are
a whole
lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for
the
more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working.
Fancy
embedded
stuff I either leave to the experts, do in hardware, or hive off
to an
outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was
burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
That's the point. Forth is one of the simplest development tools you
will ever find. It also has some of the least constraints. The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues.
I used to program in RPN routinely, still use RPN calculators
exclusively, and don't like Forth. Worrying about the state of the
stack is something I much prefer to let the compiler deal with. It's
like C functions with ten positional parameters.
If you are writing Forth code and passing 10 items into a definition, you
have missed a *lot* on how to write Forth code. I can see why you are
frustrated.
--
Rick C
Phil Hobbs
2017-08-07 19:27:02 UTC
Permalink
Post by rickman
Post by Phil Hobbs
Post by rickman
Post by Tom Gardner
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's hard
to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it
since
I was a
teenager. I picked up good habits from reading books and other
people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are
a whole
lot of
libraries available (for Python especially) so a determined
beginner
can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then
start in
on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for
the
more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or
HDL,
so I do
only enough embedded things to get the whole instrument working.
Fancy
embedded
stuff I either leave to the experts, do in hardware, or hive off
to an
outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in
the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was
burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
That's the point. Forth is one of the simplest development tools you
will ever find. It also has some of the least constraints. The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues.
I used to program in RPN routinely, still use RPN calculators
exclusively, and don't like Forth. Worrying about the state of the
stack is something I much prefer to let the compiler deal with. It's
like C functions with ten positional parameters.
If you are writing Forth code and passing 10 items into a definition,
you have missed a *lot* on how to write Forth code. I can see why you
are frustrated.
I'm not frustrated, partly because I haven't written anything in Forth
for over 30 years. ;)

And I didn't say I was passing 10 parameters to a Forth word, either.
It's just that having to worry about the state of the stack is so 1975.
I wrote my last HP calculator program in the early '80s, and have no
burning desire to do that again either.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-07 20:47:37 UTC
Permalink
Post by Phil Hobbs
Post by rickman
Post by Phil Hobbs
Post by rickman
Post by Tom Gardner
Post by John Devereux
Post by Tom Gardner
Post by Tom Gardner
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff
that's hard
to pick up
on your own, i.e. math and the more mathematical parts of
engineering
(especially signals & systems and electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it
since
I was a
teenager. I picked up good habits from reading books and other
people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are
a whole
lot of
libraries available (for Python especially) so a determined
beginner
can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then
start in
on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for
the
more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or
HDL,
so I do
only enough embedded things to get the whole instrument working.
Fancy
embedded
stuff I either leave to the experts, do in hardware, or hive off
to an
outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in
the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was
burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct valid C++
program that never completed compilation - because, during
compilation the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity.
I don't think that particular criticism is really fair - it seems the
(rather simple) C preprocessor is also "turing complete" or at least
close to it e.g,.
https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
Or a C prime number generator that mostly uses the preprocessor
https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
That's the point. Forth is one of the simplest development tools you
will ever find. It also has some of the least constraints. The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues.
I used to program in RPN routinely, still use RPN calculators
exclusively, and don't like Forth. Worrying about the state of the
stack is something I much prefer to let the compiler deal with. It's
like C functions with ten positional parameters.
If you are writing Forth code and passing 10 items into a definition,
you have missed a *lot* on how to write Forth code. I can see why you
are frustrated.
I'm not frustrated, partly because I haven't written anything in Forth
for over 30 years. ;)
And I didn't say I was passing 10 parameters to a Forth word, either.
It's just that having to worry about the state of the stack is so 1975.
I wrote my last HP calculator program in the early '80s, and have no
burning desire to do that again either.
You clearly mentioned 10 parameters, no?

I get that you don't fully understand Forth. When I said "The only people
who think it is a bad idea are those who think RPN is a problem and object
to other trivial issues" by other trivial issues I was referring to the use
of the stack.
--
Rick C
Phil Hobbs
2017-08-07 21:30:35 UTC
Permalink
Post by rickman
Post by Phil Hobbs
[...]
And I didn't say I was passing 10 parameters to a Forth word, either.
It's just that having to worry about the state of the stack is so 1975.
I wrote my last HP calculator program in the early '80s, and have no
burning desire to do that again either.
You clearly mentioned 10 parameters, no?
Yes, I was making the point that having to keep the state of the stack
in mind was error prone in the same way as passing that many parameters
in C. It's also annoying to document. In C, I don't have to say what
the values of the local variables are--it's clear from the code.
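As an aside, the positional-parameter half of that analogy is easy to
sketch. The uart_init() below is a made-up example, not any real API;
it just shows why a long positional list tells you nothing at the call
site, while a named structure documents itself:

#include <cstdint>
#include <cstdio>

// Positional style: easy to swap two arguments without the compiler noticing.
void uart_init(uint32_t baud, unsigned data_bits, unsigned stop_bits,
               bool parity, bool flow_control)
{
    std::printf("baud=%u bits=%u stop=%u parity=%d flow=%d\n",
                (unsigned)baud, data_bits, stop_bits,
                (int)parity, (int)flow_control);
}

// Named style: the intent is visible where the call is made.
struct UartConfig {
    uint32_t baud         = 115200;
    unsigned data_bits    = 8;
    unsigned stop_bits    = 1;
    bool     parity       = false;
    bool     flow_control = false;
};

void uart_init(const UartConfig& c)
{
    uart_init(c.baud, c.data_bits, c.stop_bits, c.parity, c.flow_control);
}

int main()
{
    uart_init(115200, 8, 1, false, true);   // which bool was which?

    UartConfig cfg;
    cfg.stop_bits    = 2;
    cfg.flow_control = true;                // reads like documentation
    uart_init(cfg);
    return 0;
}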
Post by rickman
I get that you don't fully understand Forth. When I said "The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues" by other trivial issues I was
referring to the use of the stack.
Well, the fact that you think of Forth's main wart as a trivial issue is
probably why you like it. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-07 22:35:01 UTC
Permalink
Post by Phil Hobbs
[...]
Yes, I was making the point that having to keep the state of the stack
in mind was error prone in the same way as passing that many parameters
in C. It's also annoying to document. In C, I don't have to say what
the values of the local variables are--it's clear from the code.
Yes, it is error prone in the same way adding numbers is to a fourth grader.
So use a calculator... but that's actually slower and can't be done if you
don't have a calculator! That's the analogy I would use. Dealing with the
stack is trivial if you make a small effort.

Once I was in a discussion about dealing with the problems of debugging
stack errors which usually are a mismatch between the number of parameters
passed to/from and the number the definition is actually using. This is
exactly the sort of problem a compiler can check, but typically is not done
in Forth. Jeff Fox simply said something like, this proves the programmer
can't count. I realized how simple the truth is. When considered in the
context of how Forth programs are debugged this is simply not a problem
worth dealing with by the compiler. If you learn more about Forth you will
see that.

The stack is not the problem.
Post by Phil Hobbs
Post by rickman
I get that you don't fully understand Forth. When I said "The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues" by other trivial issues I was
referring to the use of the stack.
Well, the fact that you think of Forth's main wart as a trivial issue is
probably why you like it. ;)
Yes, I expect you would call this a wart too...

https://k30.kn3.net/AB653626F.jpg

I think Forth's biggest problem is people who can't see the beauty for the
mark.
--
Rick C
Phil Hobbs
2017-08-07 22:56:23 UTC
Permalink
Post by rickman
[...]
Yes, it is error prone in the same way adding numbers is to a fourth
grader. So use a calculator... but that's actually slower and can't be
done if you don't have a calculator! That's the analogy I would use.
Dealing with the stack is trivial if you make a small effort.
Fortunately I don't need to fight that particular war, because there are
excellent C++ implementations for just about everything.

"Back when I was young, we used to defrag hard disks by hand, with
magnets."
Post by rickman
Once I was in a discussion about dealing with the problems of debugging
stack errors which usually are a mismatch between the number of
parameters passed to/from and the number the definition is actually
using. This is exactly the sort of problem a compiler can check, but
typically is not done in Forth. Jeff Fox simply said something like,
this proves the programmer can't count. I realized how simple the truth
is. When considered in the context of how Forth programs are debugged
this is simply not a problem worth dealing with by the compiler. If you
learn more about Forth you will see that.
The stack is not the problem.
Fanbois always say that stuff. It's dumb. A C fanboi would probably
make the same crack about someone who got two parameters backwards in
that 10-parameter function we were talking about. "C does what you tell
it, so if you get it wrong, you're a poopyhead who doesn't have The
Right Stuff like us 733t h4x0r$."

The complexity of software is bad enough without that sort of nonsense,
from whichever side. Time is money, so if you have a compiler that
catches errors for you, use it. Doing otherwise is pure fanboiism.
(Nice coinage, that.) ;)
Post by rickman
Post by Phil Hobbs
Post by rickman
I get that you don't fully understand Forth. When I said "The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues" by other trivial issues I was
referring to the use of the stack.
Well, the fact that you think of Forth's main wart as a trivial issue is
probably why you like it. ;)
Yes, I expect you would call this a wart too...
https://k30.kn3.net/AB653626F.jpg
I think Forth's biggest problem is people who can't see the beauty for
the mark.
Well, that poor girl unfortunately wasn't so pretty inside. The mark
had very little to do with it.

Maybe if I had more alcohol it would help me see the inner beauty of
Forth. Dunno if it would last though. As the wise man said, "I came
home at 2 with a 10, and woke up at 10 with a 2." ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-07 23:02:15 UTC
Permalink
Post by Phil Hobbs
[...]
Well, that poor girl unfortunately wasn't so pretty inside. The mark had
very little to do with it.
Maybe if I had more alcohol it would help me see the inner beauty of Forth.
Dunno if it would last though. As the wise man said, "I came home at 2 with
a 10, and woke up at 10 with a 2." ;)
You have a strange perspective on life.
--
Rick C
Phil Hobbs
2017-08-07 23:08:02 UTC
Permalink
Post by rickman
[...]
You have a strange perspective on life.
It's called "teasing", Rick. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Hans-Bernhard Bröker
2017-08-07 23:30:17 UTC
Permalink
[...]
Post by Phil Hobbs
Post by rickman
You have a strange perspective on life.
It's called "teasing", Rick. ;)
Guys, that makes about 300 lines of unmodified quote, for one line of
reply. In other words, you've now dropped to effectively a 1:300
signal-to-noise ratio. So, could either one of you _please_ clip
irrelevant quoted materiel from their replies, at least every once in a
while? Pretty please?
Phil Hobbs
2017-08-07 23:32:37 UTC
Permalink
Post by Hans-Bernhard Bröker
[...]
Post by Phil Hobbs
Post by rickman
You have a strange perspective on life.
It's called "teasing", Rick. ;)
Guys, that makes about 300 lines of unmodified quote, for one line of
reply. In other words, you've now dropped to effectively a 1:300
signal-to-noise ratio. So, could either one of you _please_ clip
irrelevant quoted materiel from their replies, at least every once in a
while? Pretty please?
I think we're probably done--Rick likes Forth, and I don't, is about the
size of it. Sorry about your 300-baud connection. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-07 23:53:07 UTC
Permalink
Post by Phil Hobbs
Post by rickman
Post by Phil Hobbs
Well, that poor girl unfortunately wasn't so pretty inside. The mark had
very little to do with it.
Maybe if I had more alcohol it would help me see the inner beauty of Forth.
Dunno if it would last though. As the wise man said, "I came home at 2 with
a 10, and woke up at 10 with a 2." ;)
You have a strange perspective on life.
It's called "teasing", Rick. ;)
I guess you hit a nerve calling Monroe ugly inside. I've always felt bad
about the way many people end their lives. If people have broken limbs we
hurry them to the hospital for treatment. When they have mental issues we
tell them they should get some help and even if they do it often isn't of
much value.
--
Rick C
Phil Hobbs
2017-08-08 00:02:20 UTC
Permalink
Post by rickman
Post by Phil Hobbs
Post by rickman
Post by Phil Hobbs
Well, that poor girl unfortunately wasn't so pretty inside. The mark had
very little to do with it.
Maybe if I had more alcohol it would help me see the inner beauty of Forth.
Dunno if it would last though. As the wise man said, "I came home at 2 with
a 10, and woke up at 10 with a 2." ;)
You have a strange perspective on life.
It's called "teasing", Rick. ;)
I guess you hit a nerve calling Monroe ugly inside. I've always felt
bad about the way many people end their lives. If people have broken
limbs we hurry them to the hospital for treatment. When they have
mental issues we tell them they should get some help and even if they do
it often isn't of much value.
I wasn't blaming her for it, because I have no idea how she got to that
place. Getting involved with Frank Sinatra and Jack Kennedy probably
didn't help. For whatever reason, clearly she was very unhappy. But
you're the one who brought in the downer, not I.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Stephen Pelc
2017-08-07 23:49:18 UTC
Permalink
Post by rickman
Post by Phil Hobbs
Maybe if I had more alcohol it would help me see the inner beauty of Forth.
Dunno if it would last though. As the wise man said, "I came home at 2 with
a 10, and woke up at 10 with a 2." ;)
You have a strange perspective on life.
Could both of you learn to trim your posts? Then I might read enough
of them to be interested.

Stephen
--
Stephen Pelc, ***@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441
web: http://www.mpeforth.com - free VFX Forth downloads
Phil Hobbs
2017-08-08 00:07:12 UTC
Permalink
Post by Stephen Pelc
Post by rickman
Post by Phil Hobbs
Maybe if I had more alcohol it would help me see the inner beauty of Forth.
Dunno if it would last though. As the wise man said, "I came home at 2 with
a 10, and woke up at 10 with a 2." ;)
You have a strange perspective on life.
Could both of you learn to trim your posts? Then I might read enough
of them to be interested.
Stephen
Hit "end" when you load the post. Works in Thunderbird at least.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Tom Gardner
2017-08-08 07:00:44 UTC
Permalink
Fanbois always say that stuff. It's dumb. A C fanboi would probably make the
same crack about someone who got two parameters backwards in that 10-parameter
function we were talking about. "C does what you tell it, so if you get it
wrong, you're a poopyhead who doesn't have The Right Stuff like us 733t h4x0r$."
C/C++ fanbois do precisely that, usually quite vehemently :(
The complexity of software is bad enough without that sort of nonsense, from
whichever side. Time is money, so if you have a compiler that catches errors
for you, use it. Doing otherwise is pure fanboiism. (Nice coinage, that.) ;)
Precisely.
George Neuner
2017-08-07 05:21:48 UTC
Permalink
On Sun, 6 Aug 2017 20:13:08 +0100, Tom Gardner
Post by Tom Gardner
Post by John Devereux
At any rate "Compile-time processing" is a big thing now in modern c++,
see e.g.
Compile Time Maze Generator (and Solver)
http://youtu.be/3SXML1-Ty5U
Funny, compile time program execution is something Forth has done for decades.
Why is this important in other languages now?
It isn't important.
What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.
That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
Stupid compiler games aside, macro programming with the full power of
the programming language has been a tour de force in Lisp almost since
the beginning - the macro facility that (essentially with only small
modifications) is still in use today was introduced ~1965.

Any coding pattern that is used repeatedly potentially is fodder for a
code generating macro. In the simple case, it can save you shitloads
of typing. In the extreme case macros can create a whole DSL that
lets you mix in code to solve problems that are best thought about
using different syntax or semantics ... without needing yet another
compiler or figuring out how to link things together.

These issues ARE relevant to programmers not working exclusively on
small devices.


Lisp's macro language is Lisp. You need to understand a bit about the
[parsed, pre compilation] AST format ... but Lisp's AST format is
standardized, and once you know it you can write Lisp code to
manipulate it.

Similarly Scheme's macro language is Scheme. Scheme doesn't expose
compiler internals like Lisp - instead Scheme macros work in terms of
pattern recognition and code to be generated in response.

The problem with C++ is that its template language is not C++, but
rather a bastard hybrid of C++ and a denotational markup language. C++
is Turing Complete. The markup language is not TC itself, but it is
recursive, and therefore Turing powerful ["powerful" is not quite the
same as "complete"]. The combination "template language" is, again,
Turing powerful [limited by the markup] ... and damn near
incomprehensible.
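For anyone who hasn't run into it, a toy sketch of that recursive
"markup" style (my own example, not George's): the recursion is carried
by template instantiation, and the base case is a specialization rather
than ordinary control flow:

#include <iostream>

// General case: defined in terms of another instantiation of itself.
template <unsigned N>
struct factorial {
    static constexpr unsigned long long value = N * factorial<N - 1>::value;
};

// The "base case" is a full specialization - the pattern-matching flavour
// grafted onto C++ rather than expressed in ordinary C++.
template <>
struct factorial<0> {
    static constexpr unsigned long long value = 1;
};

int main()
{
    // Computed entirely at compile time; 20! = 2432902008176640000.
    std::cout << factorial<20>::value << '\n';
    return 0;
}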

YMMV,
George
u***@downunder.com
2017-08-06 14:15:36 UTC
Permalink
On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
[...]
Post by Tom Gardner
Another example is that C/C++ is routinely used to develop
multi threaded code, e.g. using PThreads. That's despite
C/C++ specifically being unable to guarantee correct
operation on modern machines! Most developers are
What is multithreaded code?

I can think of two definitions:

* The operating system is running independently scheduled tasks, which
happens to use a shared address space (e.g. Windows NT and later)

* A single task with task switching between threads implemented in
software. This typically requires that the software library at least
handles the timer (RTC) clock interrupts, as in time-sharing systems.
Early examples are Ada running on VAX/VMS, MS-DOS based extenders and,
later on, early Linux PThreads.

If I understand correctly, more modern kernels (Linux 2.6 and later)
actually implement the PThread functionality in kernel mode.
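For reference, the "threads as a pure library" style that the paper
quoted below analyses looks like this minimal pthread sketch (my own
illustration); the compiler sees pthread_create() and
pthread_mutex_lock() as ordinary opaque function calls:

#include <pthread.h>
#include <cstdio>

static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// Worker passed to pthread_create(); the signature is fixed by the library.
static void* worker(void* arg)
{
    (void)arg;
    for (int i = 0; i < 100000; ++i) {
        pthread_mutex_lock(&lock);     // library call, opaque to the compiler
        ++shared_counter;
        pthread_mutex_unlock(&lock);
    }
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, nullptr);
    pthread_create(&t2, nullptr, worker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    std::printf("counter = %d\n", shared_counter);   // expect 200000
    return 0;
}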
Post by Tom Gardner
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Now that there are a lot of multicore processors, this is a really
serious issue.


But again, whether multitasking/multithreading should be implemented
in a multitasking OS or in a programming language is a very important
question.

To the OP, what you are going to need in the next 3 to 10 years is
hard to predict.
Tom Gardner
2017-08-06 19:21:14 UTC
Permalink
Post by u***@downunder.com
On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
[...]
Now that there are a lot of multicore processors, this is a really
serious issue.
There have been multicore processors for *decades*, and
problems have been surfacing - and being swept under the
carpet for decades.

The only change is that now you can get 32 core embedded
processors for $15.

13 years after Boehm's paper, there are signs that C/C++
might be getting a memory model sometime. The success of
that endeavour is yet to be proven.

Memory models are /difficult/. Even Java, starting from a
clean sheet, had to revise its memory model in the light
of experience.
Post by u***@downunder.com
But again, whether multitasking/multithreading should be implemented
in a multitasking OS or in a programming language is a very important
question.
That question is moot, since the multitasking OS is implemented
in a programming language, usually C/C++.
u***@downunder.com
2017-08-07 07:35:26 UTC
Permalink
On Sun, 6 Aug 2017 20:21:14 +0100, Tom Gardner
Post by Tom Gardner
Post by u***@downunder.com
On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
Post by Tom Gardner
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Now that there are a lot of multicore processors, this is a really
serious issue.
There have been multicore processors for *decades*, and
problems have been surfacing - and being swept under the
carpet for decades.
All the pre-1980's multiprocessors that I have seen have been
_asymmetric_ multiprocessors, i.e. one CPU running the OS, while the
other CPUs are running application programs. Thus, the OS handled
locking of data.

Of course, there have been cache coherence issues even with a single
processor, such as DMA and interrupts. These issues have been under
control for decades.
Post by Tom Gardner
The only change is that now you can get 32 core embedded
processors for $15.
Those coherence issues should be addressed (sic) by the OS writer, not
the compiler. Why mess with these issues in each and every language,
when it should be done only once at the OS level.
Post by Tom Gardner
13 years after Boehm's paper, there are signs that C/C++
might be getting a memory model sometime. The success of
that endeavour is yet to be proven.
Memory models are /difficult/. Even Java, starting from a
clean sheet, had to revise its memory model in the light
of experience.
Post by u***@downunder.com
But again, whether multitasking/multithreading should be implemented
in a multitasking OS or in a programming language is a very important
question.
That question is moot, since the multitasking OS is implemented
in a programming language, usually C/C++.
Usually very low level operations, such as cache invalidation and
interrupt preambles, are done in assembler anyway, especially with
very specialized kernel mode instructions.
David Brown
2017-08-07 10:54:11 UTC
Permalink
Post by u***@downunder.com
On Sun, 6 Aug 2017 20:21:14 +0100, Tom Gardner
Post by Tom Gardner
Post by u***@downunder.com
On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
Post by Tom Gardner
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Now that there are a lot of multicore processors, this is a really
serious issue.
There have been multicore processors for *decades*, and
problems have been surfacing - and being swept under the
carpet for decades.
All the pre-1980's multiprocessors that I have seen have been
_asymmetric_ multiprocessors, i.e. one CPU running the OS, while the
other CPUs are running application programs. Thus, the OS handled
locking of data.
Of course, there have been cache coherence issues even with a single
processor, such as DMA and interrupts. These issues have been under
control for decades.
Post by Tom Gardner
The only change is that now you can get 32 core embedded
processors for $15.
Those coherence issues should be addressed (sic) by the OS writer, not
the compiler. Why mess with these issues in each and every language,
when it should be done only once at the OS level?
That is one way to look at it. The point of the article above is that
coherence cannot be implemented in C or C++ alone (at the time when it
was written - before C11 and C++11). You need help from the compiler.
You have several options:

1. You can use C11/C++11 features such as fences and synchronisation
atomics.

2. You can use implementation-specific features, such as a memory
barrier like asm volatile("dmb" ::: "memory") that will depend on the
compiler and possibly the target.

3. You can use an OS or threading library that includes these
implementation-specific features for you. This is often the easiest,
but you might do more locking than you need to, or have other inefficiencies.

4. You cheat, and assume that calling external functions defined in
different units, or using volatiles, etc., can give you what you want.
This usually works until you have more aggressive optimisation enabled.
Note that sometimes OS's use these techniques.

5. You write code that looks right, and works fine in testing, but is
subtly wrong.

6. You disable global interrupts around the awkward bits.


You are correct that this can be done with a compiler that assumes a
single-threaded single-cpu view of the world (as C and C++ did before
2011). You just need the appropriate compiler- and target-specific
barriers and synchronisation instructions in the right places, and often
putting them in the OS calls is the best place. But compiler support
can make it more efficient and more portable.
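For concreteness, here is a minimal sketch (my own illustration, not
from Boehm's paper) of options 1 and 2 for a one-way handoff of a value
between two threads. The release/acquire pair in the C11 version gives
the compiler and the cpu the same ordering information that the
hand-placed "dmb" provides in the second version:

#include <stdatomic.h>
#include <stdbool.h>

/* Option 1: C11 atomics.  The release store / acquire load pair orders
   the data access relative to the flag. */
static int shared_data;
static atomic_bool ready;                     /* static => starts false */

void producer(int value)
{
    shared_data = value;                      /* plain store            */
    atomic_store_explicit(&ready, true, memory_order_release);
}

int consumer(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                     /* spin until flag is set */
    return shared_data;                       /* sees 'value'           */
}

/* Option 2: implementation-specific (gcc on ARM shown), pre-C11 style.
   The ordering comes from a hand-placed compiler+hardware barrier on
   each side.  It works, but it is not portable and easy to get wrong. */
static volatile int ready2;
static int shared_data2;

void producer2(int value)
{
    shared_data2 = value;
    asm volatile("dmb" ::: "memory");         /* data before flag       */
    ready2 = 1;
}

int consumer2(void)
{
    while (!ready2)
        ;
    asm volatile("dmb" ::: "memory");         /* flag before data read  */
    return shared_data2;
}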
Post by u***@downunder.com
Post by Tom Gardner
13 years after Boehm's paper, there are signs that C/C++
might be getting a memory model sometime. The success of
that endeavour is yet to be proven.
Memory models are /difficult/. Even Java, starting from a
clean sheet, had to revise its memory model in the light
of experience.
Post by u***@downunder.com
But again, whether multitasking/multithreading should be implemented in a
multitasking OS or in a programming language is a very important
question.
That question is moot, since the multitasking OS is implemented
in a programming language, usually C/C++.
Usually very low-level operations, such as invalidating caches and
interrupt preambles, are done in assembler anyway, especially with
very specialized kernel mode instructions.
Interrupt preambles and postambles are usually generated by the
compiler, using implementation-specific features like #pragma or
__attribute__ to mark the interrupt function. Cache control and similar
specialised opcodes may often be done using inline assembly rather than
full assembler code, or using compiler-specific intrinsic functions.
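To give a hedged example (the exact attribute, pragma and intrinsic
names vary by compiler and target, so treat these as gcc-flavoured
illustrations rather than a recipe):

/* Mark an interrupt handler so the compiler generates the right
   preamble/postamble; some toolchains use a #pragma or a vendor
   macro instead of the attribute. */
void __attribute__((interrupt)) uart_rx_isr(void)
{
    /* read the data register, push the byte into a queue, ... */
}

/* Barrier / cache-maintenance style operations as inline assembly
   rather than a separate .s file: */
static inline void data_sync_barrier(void)
{
    asm volatile("dsb" ::: "memory");   /* ARM example */
}

/* Vendor headers often wrap the same thing as an intrinsic, e.g. the
   CMSIS __DSB() on Cortex-M. */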
David Brown
2017-08-07 10:34:31 UTC
Permalink
Post by Tom Gardner
Post by u***@downunder.com
On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
Post by Tom Gardner
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Now that there are a lot of multicore processors, this is a really
serious issue.
There have been multicore processors for *decades*, and
problems have been surfacing - and being swept under the
carpet for decades.
The only change is that now you can get 32 core embedded
processors for $15.
13 years after Boehm's paper, there are signs that C/C++
might be getting a memory model sometime. The success of
that endeavour is yet to be proven.
C++11 and C11 both have memory models, and explicit coverage of
threading, synchronisation and atomicity.
Post by Tom Gardner
Memory models are /difficult/. Even Java, starting from a
clean sheet, had to revise its memory model in the light
of experience.
Post by u***@downunder.com
But again, whether multitasking/multithreading should be implemented in a
multitasking OS or in a programming language is a very important
question.
That question is moot, since the multitasking OS is implemented
in a programming language, usually C/C++.
Phil Hobbs
2017-08-07 16:43:44 UTC
Permalink
Post by u***@downunder.com
Now that there are a lot of multicore processors, this is a really
serious issue.
But again, whether multitasking/multithreading should be implemented in a
multitasking OS or in a programming language is a very important
question.
To the OP, what you are going to need in the next 3 to 10 years is
hard to predict.
The old Linux threads library used heavyweight processes to mimic
lightweight threads. That's a mess. Pthreads is much nicer.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
rickman
2017-08-06 16:50:03 UTC
Permalink
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Not sure what you mean by "sandy". Like walking on sand where every step is
extra work, like sand getting into everything, like sea shore sand washing
away in a storm?

That is one of the things Hugh did right: he came up with a novice package
that allowed a beginner to become more productive than if they had to write
all that code themselves. He just has trouble understanding that his way
isn't the only way.
Post by Tom Gardner
Another example is that C/C++ is routinely used to develop
multi threaded code, e.g. using PThreads. That's despite
C/C++ specifically being unable to guarantee correct
operation on modern machines! Most developers are
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Sounds like Forth, where it is up to the programmer to make sure the code is
written correctly.
--
Rick C
Phil Hobbs
2017-08-07 16:36:36 UTC
Permalink
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working.
Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct, valid C++
program that never completed compilation - because, during
compilation, the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity?
Another example is that C/C++ is routinely used to develop
multi threaded code, e.g. using PThreads. That's despite
C/C++ specifically being unable to guarantee correct
operation on modern machines! Most developers are
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Interesting read, thanks. I started writing multithreaded programs in
1992, using IBM C/Set 2 on OS/2 2.0. My only GUI programs were on OS/2.

My biggest effort to date is a clusterized 3-D EM simulator, which is
multithreaded, multicore, multi-box, on a heterogeneous bunch of Linux
and Windows boxes. (I haven't tested the Windows version in a while, so
it's probably broken, but it used to work.) It's written in C++ using
the C-with-classes-and-a-few-templates OOP style, which is a really good
match for simulation and instrument control code. The optimizer is a
big Rexx script that functions a lot like a math program. A pal of mine
wrote an EM simulator that plugs into Matlab, but his isn't clusterized
so I couldn't use it.

My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
Post by Tom Gardner
I still have nightmares about the horribly buggy PIC C17 compiler for the
PIC17C452A, circa 1999. I was using it in an interesting very low cost infrared
imager <http://electrooptical.net#footprints>. I had an ICE, which was a help,
but I spent more time finding bug workarounds than coding.
There are always crap instantiations of tools, but they
can be avoided. I'm more concerned about tools where the
specification prevents good and safe tools.
Eventually when the schedule permitted I ported the code to HiTech C, which was
a vast improvement. Microchip bought HiTech soon thereafter, and PIC C died a
well deserved but belated death.
My son and I are doing a consulting project together--it's an M4-based
concentrator unit for up to 6 UV/visible/near IR/thermal IR sensors for a fire
prevention company. He just got the SPI interrupt code working down on the
metal a couple of minutes ago. It's fun when your family understands what you
do. :)
Lucky you -- I think! I've never been convinced of the
wisdom of mixing work and home life, and family businesses
seem to be the source material for reality television :)
Well, being Christians helps, as does being fond of each other. He'll
probably want to start running the business after he finishes grad
school--I want to be like Zelazny's character Dworkin, who spends his
time casually altering the structure of reality in his dungeon. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
Tom Gardner
2017-08-07 17:40:17 UTC
Permalink
Post by Phil Hobbs
Post by Tom Gardner
Post by Tom Gardner
Post by Tom Gardner
Another thing is to concentrate the course work on stuff that's hard to pick up
on your own, i.e. math and the more mathematical parts of engineering
(especially signals & systems and electrodynamics).
Agreed.
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks,
(which has an impressively high signal-to-noise ratio), or
watch the news (which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since I was a
teenager. I picked up good habits from reading books and other people's code.
Yes, but it was easier back then: the tools, problems
and solutions were, by and large, much simpler and more
self-contained.
I'm not so sure. Debuggers have improved out of all recognition, with two
exceptions (gdb and Arduino, I'm looking at you). Plus there are a whole lot of
libraries available (for Python especially) so a determined beginner can get
something cool working (after a fashion) fairly fast.
Yes, that's all true. The speed of getting something going
is important for a beginner. But if the foundation is "sandy"
then it can be necessary and difficult to get beginners
(and managers) to appreciate the need to progress to tools
with sounder foundations.
The old time "sandy" tool was Basic. While Python is much
better than Basic, it is still "sandy" when it comes to
embedded real time applications.
Seems as though youngsters mostly start with Python and then start in on either
webdev or small SBCs using Arduino / AVR Studio / Raspbian or (for the more
ambitious) something like BeagleBone or (a fave) LPCxpresso. Most of my
embedded work is pretty light-duty, so an M3 or M4 is good medicine.
I'm much
better at electro-optics and analog/RF circuitry than at MCUs or HDL, so I do
only enough embedded things to get the whole instrument working.
Fancy embedded
stuff I either leave to the experts, do in hardware, or hive off to an outboard
computer via USB serial, depending on the project.
I wish more people took that attitude!
It's certainly true that things get complicated fast, but they did in the old
days too. Of course the reasons are different: nowadays it's the sheer
complexity of the silicon and the tools, whereas back then it was burn-and-crash
development, flaky in-system emulators, and debuggers which (if they even
existed) were almost as bad as Arduino.
Agreed. The key difference is that with simple-but-unreliable
tools it is possible to conceive that mortals can /understand/
the tools limitations, and know when/where the tool is failing.
That simply doesn't happen with modern tools; even the world
experts don't understand their complexity! Seriously.
Consider C++. The *design committee* refused to believe C++
templates formed a Turing-complete language inside C++.
They were forced to recant when shown a correct, valid C++
program that never completed compilation - because, during
compilation, the compiler was (slowly) emitting the sequence
of prime numbers! What chance have mere mortal developers
got in the face of that complexity?
References, containing the red-flag words "discovered" and
"accident", plus some offending code:
"...TMP is something of an accident; it was discovered
during the process of standardizing the C++..."
https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
http://aszt.inf.elte.hu/~gsd/halado_cpp/ch06s04.html#Static-metaprogramming
Post by Phil Hobbs
Post by Tom Gardner
Another example is that C/C++ is routinely used to develop
multi threaded code, e.g. using PThreads. That's despite
C/C++ specifically being unable to guarantee correct
operation on modern machines! Most developers are
Threads Cannot be Implemented as a Library
Hans-J. Boehm
HP Laboratories Palo Alto
November 12, 2004 *
In many environments, multi-threaded code is written in a language that
was originally designed without thread support (e.g. C), to which a
library of threading primitives was subsequently added. There appears to
be a general understanding that this is not the right approach. We provide
specific arguments that a pure library approach, in which the compiler is
designed independently of threading issues, cannot guarantee correctness
of the resulting code.
We first review why the approach *almost* works, and then examine some
of the *surprising behavior* it may entail. We further illustrate that there
are very simple cases in which a pure library-based approach seems
*incapable of expressing* an efficient parallel algorithm.
Our discussion takes place in the context of C with Pthreads, since it is
commonly used, reasonably well specified, and does not attempt to
ensure type-safety, which would entail even stronger constraints. The
issues we raise are not specific to that context.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Interesting read, thanks.
It is, isn't it.
Post by Phil Hobbs
I started writing multithreaded programs in
1992, using IBM C/Set 2 on OS/2 2.0. My only GUI programs were on OS/2.
My biggest effort to date is a clusterized 3-D EM simulator, which is
multithreaded, multicore, multi-box, on a heterogeneous bunch of Linux
and Windows boxes. (I haven't tested the Windows version in a while, so
it's probably broken, but it used to work.) It's written in C++ using
the C-with-classes-and-a-few-templates OOP style, which is a really good
match for simulation and instrument control code. The optimizer is a
big Rexx script that functions a lot like a math program. A pal of mine
wrote an EM simulator that plugs into Matlab, but his isn't clusterized
so I couldn't use it.
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
Post by Phil Hobbs
Well, being Christians helps, as does being fond of each other.
Depends on the Christian :( My maternal grandmother and my
ex's grandmother were avowedly Christian and pretty horrible
specimens to boot. My grandmother used to write poison-pen
letters, my ex's used to viciously play favourites. So, the
"fond of each other" didn't come into play :(
Post by Phil Hobbs
He'll
probably want to start running the business after he finishes grad
school--I want to be like Zelazny's character Dworkin, who spends his
time casually altering the structure of reality in his dungeon. ;)
:)

I'll content myself with defining the structure of reality
to be what I want it to be. (Just like many denizens of this
group :)

And then I'll find a way of going out with my boots on.
David Brown
2017-08-08 08:23:05 UTC
Permalink
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ? There are, I believe, very minor differences
between the memory models of C11 and C++11, but they are basically the
same. And they provide the required synchronisation and barrier
mechanisms in a standard form. Whether people will use them
appropriately or not, is another matter. In the embedded world there
seems to be a fair proportion of people that still think C89 is a fine
standard to use. Standard atomics and fences in embedded C basically
means gcc 4.9 or newer, when C11 support was complete. For C++ it was a
little earlier. I don't know what other C or C++ compilers for embedded
use have C11/C++11 support, but gcc is the main one, especially for
modern standards support. GNU ARM Embedded had 4.9 at the end of 2014,
but it takes time for manufacturer-supplied toolchains to update.

So yes, C11/C++11 solves the problem in a standardised way - but it will
certainly take time before updated tools are in common use, and before
people make use of the new features. I suspect this will happen mainly
in the C++ world, where C++11 is a very significant change from older
C++ and it can make sense to move to C++11 almost as a new language.
Even then, I expect most people will either rely on their OS primitives
to handle barriers and fences, or use simple full barriers:

C11:
atomic_thread_fence(memory_order_seq_cst);

C++11:
std::atomic_thread_fence(std::memory_order_seq_cst);

replacing

gcc Cortex-M:
asm volatile("dmb" : : : "memory");

Linux:
mb()


The tools have all existed, even though C and C++ did not have memory
models before C11/C++11. cpus, OS's, and compilers all had memory
models, even though they might not have been explicitly documented.

And people got some things right, and some things wrong, at that time.
I think the same thing will apply now that they /have/ memory models.
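To make the correspondence concrete, here is a small sketch (mine, not
from any particular code base) of where those barriers typically sit.
Code that used to set a volatile flag next to a hand-placed dmb/mb()
can say the same thing with a fence plus relaxed atomics, and the
compiler emits whatever the target needs; using memory_order_seq_cst in
both places is the blunter full-barrier version mentioned above:

#include <stdatomic.h>

static int payload;
static atomic_int flag;

void publish_payload(int value)
{
    payload = value;
    /* everything above is ordered before the flag store below */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
}

int wait_and_read(void)
{
    while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
        ;
    /* the flag load above is ordered before the payload read below */
    atomic_thread_fence(memory_order_acquire);
    return payload;
}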
Tom Gardner
2017-08-08 08:46:40 UTC
Permalink
Post by David Brown
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ?
Oh.... picky picky picky :)
Post by David Brown
Whether people will use them
appropriately or not, is another matter.
My experience is that they won't. That's for two reasons:
1) not really understanding threading/synchronisation
issues, because they are only touched upon in schools.
Obviously that problem is language agnostic.
2) any subtleties in the C/C++ specification and
implementation "suboptimalities"; I expect those will
exist :(

Plus, of course, as you note below...
Post by David Brown
In the embedded world there
seems to be a fair proportion of people that still think C89 is a fine
standard to use.
...
Post by David Brown
So yes, C11/C++11 solves the problem in a standardised way - but it will
certainly take time before updated tools are in common use, and before
people make use of the new features.
ISTR that in the early-mid naughties there was a triumphant
announcement of the first /complete/ C or C++ compiler - 5
or 6 years after the standard was published! Of course many
compilers had implemented a usable subset before that.

No, didn't save a reference :(
Post by David Brown
And people got some things right, and some things wrong, at that time.
I think the same thing will apply now that they /have/ memory models.
Agreed.

I'm gobsmacked that it took C/C++ so long to get around to
that /fundamental/ requirement. The absence and the delay
reflect very badly on the C/C++ community.
David Brown
2017-08-08 09:26:29 UTC
Permalink
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ?
Oh.... picky picky picky :)
Well, if you decide to look this up on google, it should save you a few
false starts.
Post by Tom Gardner
Post by David Brown
Whether people will use them
appropriately or not, is another matter.
1) not really understanding threading/synchronisation
issues, because they are only touched upon in schools.
Obviously that problem is language agnostic.
Agreed. This stuff is hard to understand if you want to get it correct
/and/ optimally efficient.
Post by Tom Gardner
2) any subtleties in the C/C++ specification and
implementation "suboptimalities"; I expect those will
exist :(
I have read through the specs and implementation information - quite a
bit of work has gone into making it possible to write safe code that is
more efficient than was previously possible (or at least practical). It
is not so relevant for small embedded systems, where you generally have
a single core and little in the way of write buffers - there is not
much, if anything, to be gained by replacing blunt full memory barriers
with tuned load-acquire and store-release operations. But for bigger
systems with multiple cpus, a full barrier can cost hundreds of cycles.

There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
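For anyone who has not met "consume": the intended use case is
publishing a pointer and having readers follow it, along the lines of
this simplified sketch of mine:

#include <stdatomic.h>

struct config { int threshold; };

static _Atomic(struct config *) current;

void publish_config(struct config *fresh)
{
    atomic_store_explicit(&current, fresh, memory_order_release);
}

int read_threshold(void)
{
    struct config *c =
        atomic_load_explicit(&current, memory_order_consume);
    /* Only the data-dependent read through 'c' needs ordering.  On
       most cpus the address dependency gives that for free, which is
       why "consume" was meant to be cheaper than "acquire"; in
       practice compilers just promote it to acquire, as noted above. */
    return c ? c->threshold : 0;
}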
Post by Tom Gardner
Plus, of course, as you note below...
Post by David Brown
In the embedded world there
seems to be a fair proportion of people that still think C89 is a fine
standard to use.
...
Post by David Brown
So yes, C11/C++11 solves the problem in a standardised way - but it will
certainly take time before updated tools are in common use, and before
people make use of the new features.
ISTR that in the early-mid naughties there was a triumphant
announcement of the first /complete/ C or C++ compiler - 5
or 6 years after the standard was published! Of course many
compilers had implemented a usable subset before that.
Things have changed a good deal since then. The major C++ compilers
(gcc, clang, MSVC) have complete C++11 and C++14 support, with gcc and
clang basically complete on the C++17 final drafts. gcc has "concepts",
slated for C++20 pretty much "as is", and MSVC and clang have prototype
"modules" which are also expected for C++20 (probably based on MSVC's
slightly better version).

<http://en.cppreference.com/w/cpp/compiler_support>

These days a feature does not make it into the C or C++ standards unless
there is a working implementation in at least one major toolchain to
test it out in practice.
Post by Tom Gardner
No, didn't save a reference :(
Post by David Brown
And people got some things right, and some things wrong, at that time.
I think the same thing will apply now that they /have/ memory models.
Agreed.
I'm gobsmacked that it took C/C++ so long to get around to
that /fundamental/ requirement. The absence and the delay
reflect very badly on the C/C++ community.
As I said, people managed fine without it. Putting together a memory
model that the C folks and C++ folks could agree on for all the
platforms they support is not a trivial effort - and I am very glad they
agreed here. Of course I agree that it would have been nice to have had
it earlier. The thread support (as distinct from the atomic support,
including memory models) is far too little, far too late and I doubt if
it will have much use.
Tom Gardner
2017-08-08 10:09:11 UTC
Permalink
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ?
Oh.... picky picky picky :)
Well, if you decide to look this up on google, it should save you a few
false starts.
Unfortunately google doesn't prevent idiots from making
tyupos :) (Or is that fortunately?)
Post by David Brown
Post by Tom Gardner
Post by David Brown
Whether people will use them
appropriately or not, is another matter.
1) not really understanding threading/synchronisation
issues, because they are only touched upon in schools.
Obviously that problem is language agnostic.
Agreed. This stuff is hard to understand if you want to get it correct
/and/ optimally efficient.
Post by Tom Gardner
2) any subtleties in the C/C++ specification and
implementation "suboptimalities"; I expect those will
exist :(
I have read through the specs and implementation information - quite a
bit of work has gone into making it possible to write safe code that is
more efficient than was previously possible (or at least practical). It
is not so relevant for small embedded systems, where you generally have
a single core and little in the way of write buffers - there is not
much, if anything, to be gained by replacing blunt full memory barriers
with tuned load-acquire and store-release operations. But for bigger
systems with multiple cpus, a full barrier can cost hundreds of cycles.
Agreed, with the caveat that "small" ain't what it used to be.
Consider Zynqs: dual-core ARMs with caches and, obviously, FPGA
fabric.
Consider single 32 core MCUs for £25 one-off. (xCORE)
There are many other examples, and that trend will continue.
Post by David Brown
There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
One of C/C++'s problems is deciding to cater for, um,
weird and obsolete architectures. I see /why/ they do
that, but on Mondays Wednesdays and Fridays I'd prefer
a concentration on doing common architectures simply
and well.
Post by David Brown
Post by Tom Gardner
ISTR that in the early-mid naughties there was a triumphant
announcement of the first /complete/ C or C++ compiler - 5
or 6 years after the standard was published! Of course many
compilers had implemented a usable subset before that.
Things have changed a good deal since then. The major C++ compilers
(gcc, clang, MSVC) have complete C++11 and C++14 support, with gcc and
clang basically complete on the C++17 final drafts. gcc has "concepts",
slated for C++20 pretty much "as is", and MSVC and clang have prototype
"modules" which are also expected for C++20 (probably based on MSVC's
slightly better version).
<http://en.cppreference.com/w/cpp/compiler_support>
These days a feature does not make it into the C or C++ standards unless
there is a working implementation in at least one major toolchain to
test it out in practice.
Yes, but I presume that was also the case in the naughties.
(I gave up following the detailed C/C++ shenanigans during the
interminable "cast away constness" philosophical discussions)

The point was about the first compiler that (belatedly)
correctly implemented /all/ the features.
Post by David Brown
Post by Tom Gardner
Post by David Brown
And people got some things right, and some things wrong, at that time.
I think the same thing will apply now that they /have/ memory models.
Agreed.
I'm gobsmacked that it took C/C++ so long to get around to
that /fundamental/ requirement. The absence and the delay
reflect very badly on the C/C++ community.
As I said, people managed fine without it. Putting together a memory
model that the C folks and C++ folks could agree on for all the
platforms they support is not a trivial effort - and I am very glad they
agreed here. Of course I agree that it would have been nice to have had
it earlier. The thread support (as distinct from the atomic support,
including memory models) is far too little, far too late and I doubt if
it will have much use.
While there is no doubt people /thought/ they managed,
it is less clear cut that it was "fine".

I'm disappointed that thread support might not be as
useful as desired, but memory model and atomic is more
important.
David Brown
2017-08-08 10:56:37 UTC
Permalink
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ?
Oh.... picky picky picky :)
Well, if you decide to look this up on google, it should save you a few
false starts.
Unfortunately google doesn't prevent idiots from making
tyupos :) (Or is that fortunately?)
Post by David Brown
Post by Tom Gardner
Post by David Brown
Whether people will use them
appropriately or not, is another matter.
1) not really understanding threading/synchronisation
issues, because they are only touched upon in schools.
Obviously that problem is language agnostic.
Agreed. This stuff is hard to understand if you want to get it correct
/and/ optimally efficient.
Post by Tom Gardner
2) any subtleties in the C/C++ specification and
implementation "suboptimalities"; I expect those will
exist :(
I have read through the specs and implementation information - quite a
bit of work has gone into making it possible to write safe code that is
more efficient than was previously possible (or at least practical). It
is not so relevant for small embedded systems, where you generally have
a single core and little in the way of write buffers - there is not
much, if anything, to be gained by replacing blunt full memory barriers
with tuned load-acquire and store-release operations. But for bigger
systems with multiple cpus, a full barrier can cost hundreds of cycles.
Agreed, with the caveat that "small" ain't what it used to be.
Consider Zynqs: dual-core ARMs with caches and, obviously, FPGA
fabric.
True. I'd be happy to see people continue to use full memory barriers -
they may not be speed optimal, but they will lead to correct code. Let
those who understand the more advanced synchronisation stuff use
acquire-release. And of course a key point is for people to use RTOS
features when they can - again, using a mutex or semaphore might not be
as efficient as a fancy lock-free algorithm, but it is better to be safe
than fast.
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
The xCORE is a bit different, as is the language you use and the style
of the code. Message passing is a very neat way to swap data between
threads or cores, and is inherently safer than shared memory.
Post by Tom Gardner
There are many other examples, and that trend will continue.
Yes.
Post by Tom Gardner
Post by David Brown
There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
One of C/C++'s problems is deciding to cater for, um,
weird and obsolete architectures. I see /why/ they do
that, but on Mondays Wednesdays and Fridays I'd prefer
a concentration on doing common architectures simply
and well.
In general, I agree. In this particular case, the Alpha is basically
obsolete - but it is certainly possible that future cpu designs would
have equally weak memory models. Such a weak model is easier to make
faster in hardware - you need less synchronisation, cache snooping, and
other such details.
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
ISTR that in the early-mid naughties there was a triumphant
announcement of the first /complete/ C or C++ compiler - 5
or 6 years after the standard was published! Of course many
compilers had implemented a usable subset before that.
Things have changed a good deal since then. The major C++ compilers
(gcc, clang, MSVC) have complete C++11 and C++14 support, with gcc and
clang basically complete on the C++17 final drafts. gcc has "concepts",
slated for C++20 pretty much "as is", and MSVC and clang have prototype
"modules" which are also expected for C++20 (probably based on MSVC's
slightly better version).
<http://en.cppreference.com/w/cpp/compiler_support>
These days a feature does not make it into the C or C++ standards unless
there is a working implementation in at least one major toolchain to
test it out in practice.
Yes, but I presume that was also the case in the naughties.
No, not to the same extent. Things move faster now, especially in the
C++ world. C++ is on a three year update cycle now. The first ISO
standard was C++98, with C++03 being a minor update 5 years later. It
took until C++11 to get a real new version (with massive changes) - and
now we are getting real, significant improvements every 3 years.
Post by Tom Gardner
(I gave up following the detailed C/C++ shenanigans during the
interminable "cast away constness" philosophical discussions)
The point was about the first compiler that (belatedly)
correctly implemented /all/ the features.
Post by David Brown
Post by Tom Gardner
Post by David Brown
And people got some things right, and some things wrong, at that time.
I think the same thing will apply now that they /have/ memory models.
Agreed.
I'm gobsmacked that it took C/C++ so long to get around to
that /fundamental/ requirement. The absence and the delay
reflect very badly on the C/C++ community.
As I said, people managed fine without it. Putting together a memory
model that the C folks and C++ folks could agree on for all the
platforms they support is not a trivial effort - and I am very glad they
agreed here. Of course I agree that it would have been nice to have had
it earlier. The thread support (as distinct from the atomic support,
including memory models) is far too little, far too late and I doubt if
it will have much use.
While there is no doubt people /thought/ they managed,
it is less clear cut that it was "fine".
I'm disappointed that thread support might not be as
useful as desired, but memory model and atomic is more
important.
The trouble with thread support in C11/C++11 is that it is limited to
very simple features - mutexes, condition variables and simple threads.
But real-world use needs priorities, semaphores, queues, timers, and
many other features. Once you are using RTOS-specific APIs for all
these, you would use the RTOS APIs for threads and mutexes as well
rather than <threads.h> calls.
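For what it is worth, this is more or less the whole surface that
<threads.h> gives you - a thread, a mutex, and not much else (a minimal
sketch, not production code):

#include <threads.h>
#include <stdio.h>

static mtx_t lock;
static int counter;

static int worker(void *arg)
{
    (void)arg;
    mtx_lock(&lock);
    ++counter;                    /* protected shared update */
    mtx_unlock(&lock);
    return 0;
}

int main(void)
{
    thrd_t t;
    mtx_init(&lock, mtx_plain);
    thrd_create(&t, worker, NULL);
    thrd_join(t, NULL);
    printf("counter = %d\n", counter);
    mtx_destroy(&lock);
    return 0;
}

No priorities, no queues, no events - so on anything resembling an RTOS
you end up back at the vendor or RTOS API anyway.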
Tom Gardner
2017-08-08 14:56:39 UTC
Permalink
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Post by Phil Hobbs
My stuff is all pthreads, because std::thread didn't exist at the time,
but it does now, so presumably Boehm's input has been taken into account.
I'm told C/C++12 /finally/ has a memory model, so perhaps
that will (a few decades too late) ameliorate the problem.
We'll see, but I'm not holding my breath.
You mean C11/C++11 ?
Oh.... picky picky picky :)
Well, if you decide to look this up on google, it should save you a few
false starts.
Unfortunately google doesn't prevent idiots from making
tyupos :) (Or is that fortunately?)
Post by David Brown
Post by Tom Gardner
Post by David Brown
Whether people will use them
appropriately or not, is another matter.
1) not really understanding threading/synchronisation
issues, because they are only touched upon in schools.
Obviously that problem is language agnostic.
Agreed. This stuff is hard to understand if you want to get it correct
/and/ optimally efficient.
Post by Tom Gardner
2) any subtleties in the C/C++ specification and
implementation "suboptimalities"; I expect those will
exist :(
I have read through the specs and implementation information - quite a
bit of work has gone into making it possible to write safe code that is
more efficient than was previously possible (or at least practical). It
is not so relevant for small embedded systems, where you generally have
a single core and little in the way of write buffers - there is not
much, if anything, to be gained by replacing blunt full memory barriers
with tuned load-acquire and store-release operations. But for bigger
systems with multiple cpus, a full barrier can cost hundreds of cycles.
Agreed, with the caveat that "small" ain't what it used to be.
Consider Zynqs: dual-core ARMs with caches and, obviously, FPGA
fabric.
True. I'd be happy to see people continue to use full memory barriers -
they may not be speed optimal, but they will lead to correct code. Let
those who understand the more advanced synchronisation stuff use
acquire-release. And of course a key point is for people to use RTOS
features when they can - again, using a mutex or semaphore might not be
as efficient as a fancy lock-free algorithm, but it is better to be safe
than fast.
Agreed.
Post by David Brown
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
The xCORE is a bit different, as is the language you use and the style
of the code. Message passing is a very neat way to swap data between
threads or cores, and is inherently safer than shared memory.
Well, you can program xCOREs in C/C++, but I haven't
investigated that on the principle that I want to "kick
the tyres" of xC.

ISTR seeing that the "interface" mechanisms in xC are
shared memory underneath, optionally involving memory
copies. That is plausible since xC interfaces have
"asynchronous nonblocking" "notify" and "clear
notification" annotations on methods. Certainly they
are convenient to use and get around some pain points
in pure CSP message passing.

I'm currently in two minds as to whether I like
any departure from CSP purity :)
Post by David Brown
Post by Tom Gardner
Post by David Brown
There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
One of C/C++'s problems is deciding to cater for, um,
weird and obsolete architectures. I see /why/ they do
that, but on Mondays Wednesdays and Fridays I'd prefer
a concentration on doing common architectures simply
and well.
In general, I agree. In this particular case, the Alpha is basically
obsolete - but it is certainly possible that future cpu designs would
have equally weak memory models. Such a weak model is easier to make
faster in hardware - you need less synchronisation, cache snooping, and
other such details.
Reasonable, but given the current fixation on the mirage
of globally-coherent memory, I wonder whether that is a
lost cause.

Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
Post by David Brown
Post by Tom Gardner
I'm disappointed that thread support might not be as
useful as desired, but memory model and atomic is more
important.
The trouble with thread support in C11/C++11 is that it is limited to
very simple features - mutexes, condition variables and simple threads.
But real-world use needs priorities, semaphores, queues, timers, and
many other features. Once you are using RTOS-specific APIs for all
these, you would use the RTOS APIs for threads and mutexes as well
rather than <threads.h> calls.
That makes a great deal of sense to me, and it
brings into question how much it is worth bothering
about it in C/C++. No doubt I'll come to my senses
before too long :)
David Brown
2017-08-08 15:11:22 UTC
Permalink
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
The xCORE is a bit different, as is the language you use and the style
of the code. Message passing is a very neat way to swap data between
threads or cores, and is inherently safer than shared memory.
Well, you can program xCOREs in C/C++, but I haven't
investigated that on the principle that I want to "kick
the tyres" of xC.
ISTR seeing that the "interface" mechanisms in xC are
shared memory underneath, optionally involving memory
copies. That is plausible since xC interfaces have
"asynchronous nonblocking" "notify" and "clear
notification" annotations on methods. Certainly they
are convenient to use and get around some pain points
in pure CSP message passing.
The actual message passing can be done in several ways. IIRC, it will
use shared memory within the same cpu (8 logical cores), and channels
("real" message passing) between cpus.

However, as long as it logically uses message passing then it is up to
the tools to get the details right - it frees the programmer from having
to understand about ordering, barriers, etc.
Post by Tom Gardner
I'm currently in two minds as to whether I like
any departure from CSP purity :)
Post by David Brown
Post by Tom Gardner
Post by David Brown
There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
One of C/C++'s problems is deciding to cater for, um,
weird and obsolete architectures. I see /why/ they do
that, but on Mondays Wednesdays and Fridays I'd prefer
a concentration on doing common architectures simply
and well.
In general, I agree. In this particular case, the Alpha is basically
obsolete - but it is certainly possible that future cpu designs would
have equally weak memory models. Such a weak model is easier to make
faster in hardware - you need less synchronisation, cache snooping, and
other such details.
Reasonable, but given the current fixation on the mirage
of globally-coherent memory, I wonder whether that is a
lost cause.
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you write the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.

You are never going to get away from shared memory systems - for some
kinds of multi-threaded applications, it is much, much more efficient
than message passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
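As a concrete (and hedged - this is a sketch of mine, not a reviewed
library) illustration of message passing built on shared memory: a
single-producer, single-consumer queue where the callers only see
send/receive and the release/acquire ordering is buried inside:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 16u                        /* power of two */

struct spsc_queue {                      /* assume zero-initialised    */
    int buf[QSIZE];
    atomic_size_t head;                  /* written only by producer   */
    atomic_size_t tail;                  /* written only by consumer   */
};

bool queue_send(struct spsc_queue *q, int msg)
{
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QSIZE)
        return false;                    /* full                       */
    q->buf[head % QSIZE] = msg;          /* write the payload first... */
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;                         /* ...then publish it         */
}

bool queue_receive(struct spsc_queue *q, int *msg)
{
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (head == tail)
        return false;                    /* empty                      */
    *msg = q->buf[tail % QSIZE];         /* read the payload first...  */
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;                         /* ...then release the slot   */
}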
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
I'm disappointed that thread support might not be as
useful as desired, but memory model and atomic is more
important.
The trouble with thread support in C11/C++11 is that it is limited to
very simple features - mutexes, condition variables and simple threads.
But real-world use needs priorities, semaphores, queues, timers, and
many other features. Once you are using RTOS-specific APIs for all
these, you would use the RTOS APIs for threads and mutexes as well
rather than <threads.h> calls.
That makes a great deal of sense to me, and it
brings into question how much it is worth bothering
about it in C/C++. No doubt I'll come to my senses
before too long :)
Tom Gardner
2017-08-08 15:33:05 UTC
Permalink
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
The xCORE is a bit different, as is the language you use and the style
of the code. Message passing is a very neat way to swap data between
threads or cores, and is inherently safer than shared memory.
Well, you can program xCOREs in C/C++, but I haven't
investigated that on the principle that I want to "kick
the tyres" of xC.
ISTR seeing that the "interface" mechanisms in xC are
shared memory underneath, optionally involving memory
copies. That is plausible since xC interfaces have an
"asynchronous nonblocking" "notify" and "clear
notification" annotations on methods. Certainly they
are convenient to use and get around some pain points
in pure CSP message passing.
The actual message passing can be done in several ways. IIRC, it will
use shared memory within the same cpu (8 logical cores), and channels
("real" message passing) between cpus.
However, as long as it logically uses message passing then it is up to
the tools to get the details right - it frees the programmer from having
to understand about ordering, barriers, etc.
Just so.

I'm pretty sure:
- all "pure CSP" message passing uses the xSwitch fabric.
- the xC interfaces use shared memory between cores on
the same tile
- whereas across different tiles they bundle up a memory
copy and transmit that as messages across the xSwitch
fabric.

I can't think of a simpler/better way of achieving
the desired external behaviour.
Post by David Brown
Post by Tom Gardner
Post by David Brown
In general, I agree. In this particular case, the Alpha is basically
obsolete - but it is certainly possible that future cpu designs would
have equally weak memory models. Such a weak model is easier to make
faster in hardware - you need less synchronisation, cache snooping, and
other such details.
Reasonable, but given the current fixation on the mirage
of globally-coherent memory, I wonder whether that is a
lost cause.
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing, is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you have write the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.
You are never going to get away from shared memory systems - for some
kind of multi-threaded applications, it is much, much more efficient
than memory passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
Oh dear. Violent agreement. How boring.
u***@downunder.com
2017-08-08 18:07:30 UTC
Permalink
On Tue, 08 Aug 2017 17:11:22 +0200, David Brown
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
When there are a large number of cores/processors available, I would
start a project by assigning a thread/process for each core. Later on
you might have to do some fine adjustments to put multiple threads
into one core or split one thread into multiple cores.
Post by David Brown
Post by Tom Gardner
Post by David Brown
The xCORE is a bit different, as is the language you use and the style
of the code. Message passing is a very neat way to swap data between
threads or cores, and is inherently safer than shared memory.
Well, you can program xCOREs in C/C++, but I haven't
investigated that on the principle that I want to "kick
the tyres" of xC.
ISTR seeing that the "interface" mechanisms in xC are
shared memory underneath, optionally involving memory
copies. That is plausible since xC interfaces have an
"asynchronous nonblocking" "notify" and "clear
notification" annotations on methods. Certainly they
are convenient to use and get around some pain points
in pure CSP message passing.
The actual message passing can be done in several ways. IIRC, it will
use shared memory within the same cpu (8 logical cores), and channels
("real" message passing) between cpus.
However, as long as it logically uses message passing then it is up to
the tools to get the details right - it frees the programmer from having
to understand about ordering, barriers, etc.
Post by Tom Gardner
I'm currently in two minds as to whether I like
any departure from CSP purity :)
Post by David Brown
Post by Tom Gardner
Post by David Brown
There is one "suboptimality" - the "consume" memory order. It's a bit
weird, in that it is mainly relevant to the Alpha architecture, whose
memory model is so weak that in "x = *p;" it can fetch the contents of
*p before seeing the latest update of p. Because the C11 and C++11
specs are not clear enough on "consume", all implementations (AFAIK)
bump this up to the stronger "acquire", which may be slightly slower on
some architectures.
One of C/C++'s problems is deciding to cater for, um,
weird and obsolete architectures. I see /why/ they do
that, but on Mondays Wednesdays and Fridays I'd prefer
a concentration on doing common architectures simply
and well.
In general, I agree. In this particular case, the Alpha is basically
obsolete - but it is certainly possible that future cpu designs would
have equally weak memory models. Such a weak model is easier to make
faster in hardware - you need less synchronisation, cache snooping, and
other such details.
Reasonable, but given the current fixation on the mirage
of globally-coherent memory, I wonder whether that is a
lost cause.
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing, is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you have written the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.
You are never going to get away from shared memory systems - for some
kind of multi-threaded applications, it is much, much more efficient
than message passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
What is the issue with shared memory systems ? Use unidirectional
FIFOs between threads in shared memory for the actual message. The
real issue is how to inform the consuming thread that there is a new
message available in the FIFO.
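For illustration, a minimal sketch of one way to do that notification with C11 <threads.h>. fifo_push()/fifo_pop() are hypothetical placeholders for whatever FIFO the application already has, and queue_sync_init() is assumed to run once at startup:

#include <threads.h>
#include <stdbool.h>

extern void fifo_push(int v);        /* hypothetical application FIFO */
extern bool fifo_pop(int *out);

static mtx_t lock;
static cnd_t data_ready;

void queue_sync_init(void)
{
    mtx_init(&lock, mtx_plain);
    cnd_init(&data_ready);
}

void producer_send(int v)
{
    mtx_lock(&lock);
    fifo_push(v);
    cnd_signal(&data_ready);         /* wake one waiting consumer */
    mtx_unlock(&lock);
}

int consumer_receive(void)
{
    int v;
    mtx_lock(&lock);
    while (!fifo_pop(&v))            /* loop guards against spurious wakeups */
        cnd_wait(&data_ready, &lock);
    mtx_unlock(&lock);
    return v;
}

On a bare-metal target without <threads.h>, the equivalent is usually an RTOS semaphore or event flag signalled from the producer's context.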
Post by David Brown
Post by Tom Gardner
Post by David Brown
Post by Tom Gardner
I'm disappointed that thread support might not be as
useful as desired, but memory model and atomic is more
important.
The trouble with thread support in C11/C++11 is that it is limited to
very simple features - mutexes, condition variables and simple threads.
But real-world use needs priorities, semaphores, queues, timers, and
many other features. Once you are using RTOS-specific API's for all
these, you would use the RTOS API's for thread and mutexes as well
rather than <threads.h> calls.
That makes a great deal of sense to me, and it
brings into question how much it is worth bothering
about it in C/C++. No doubt I'll come to my senses
before too long :)
David Brown
2017-08-09 08:03:40 UTC
Permalink
Post by u***@downunder.com
On Tue, 08 Aug 2017 17:11:22 +0200, David Brown
Post by David Brown
Post by Tom Gardner
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
When there are a large number of cores/processors available, I would
start a project by assigning a thread/process for each core. Later on
you might have to do some fine adjustments to put multiple threads
into one core or split one thread into multiple cores.
The XMOS is a bit special - it has hardware multi-threading. The 32
virtual core device has 4 real cores, each with 8 hardware threaded
virtual cores. For hardware threads, you get one thread per virtual core.
Post by u***@downunder.com
Post by David Brown
Post by Tom Gardner
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing, is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you have written the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.
You are never going to get away from shared memory systems - for some
kind of multi-threaded applications, it is much, much more efficient
than message passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
What is the issue with shared memory systems ? Use unidirectional
FIFOs between threads in shared memory for the actual message. The
real issue is how to inform the consuming thread that there is a new
message available in the FIFO.
That is basically how you make a message passing system when you have
shared memory for communication. The challenge for modern systems is
making sure that other cpus see the same view of memory as the sending
one. It is not enough to simply write the message, then update the
head/tail pointers for the FIFO. You have cache coherency, write
re-ordering buffers, out-of-order execution in the cpu, etc., as well as
compiler re-ordering of writes.
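For illustration, a minimal single-producer/single-consumer ring buffer in C11 <stdatomic.h> that handles exactly these ordering issues; the release store on the head index is what makes the payload write visible before the index update. Names and sizes are illustrative only:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 64u                    /* power of two */

struct spsc {
    int buf[QSIZE];
    _Atomic size_t head;             /* written only by the producer */
    _Atomic size_t tail;             /* written only by the consumer */
};

bool spsc_push(struct spsc *q, int v)
{
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (h - t == QSIZE)
        return false;                /* full */
    q->buf[h % QSIZE] = v;           /* plain write of the payload */
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}

bool spsc_pop(struct spsc *q, int *v)
{
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t == h)
        return false;                /* empty */
    *v = q->buf[t % QSIZE];
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}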

It would be nice to see cpus (or chipsets) having better hardware
support for a variety of synchronisation mechanisms, rather than just
"flush all previous writes to memory before doing any new writes"
instructions. Multi-port and synchronised memory is expensive, but
surely it would be possible to have a small amount that could be used
for things like mutexes, semaphores, and the control parts of queues.
u***@downunder.com
2017-08-10 11:30:30 UTC
Permalink
On Wed, 09 Aug 2017 10:03:40 +0200, David Brown
Post by David Brown
Post by u***@downunder.com
On Tue, 08 Aug 2017 17:11:22 +0200, David Brown
Post by David Brown
Post by Tom Gardner
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
When there are a large number of cores/processors available, I would
start a project by assigning a thread/process for each core. Later on
you might have to do some fine adjustments to put multiple threads
into one core or split one thread into multiple cores.
The XMOS is a bit special - it has hardware multi-threading. The 32
virtual core device has 4 real cores, each with 8 hardware threaded
virtual cores. For hardware threads, you get one thread per virtual core.
Post by u***@downunder.com
Post by David Brown
Post by Tom Gardner
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing, is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you have written the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.
You are never going to get away from shared memory systems - for some
kind of multi-threaded applications, it is much, much more efficient
than message passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
What is the issue with shared memory systems ? Use unidirectional
FIFOs between threads in shared memory for the actual message. The
real issue is how to inform the consuming thread that there is a new
message available in the FIFO.
That is basically how you make a message passing system when you have
shared memory for communication. The challenge for modern systems is
making sure that other cpus see the same view of memory as the sending
one. It is not enough to simply write the message, then update the
head/tail pointers for the FIFO. You have cache coherency, write
re-ordering buffers, out-of-order execution in the cpu, etc., as well as
compiler re-ordering of writes.
Sure you have to put the pointers into non-cached memory or into
write-through cache or use some explicit instruction to perform a
cache write-back.

The problem is the granularity of the cache, typically at least a
cache line or a virtual memory page size.

While "volatile" just affects code generation, it would be nice to
have, e.g., a "no_cache" keyword to affect run-time execution and cache
handling. This would put these variables into special program sections
and let the linker put all variables requiring "no_cache" into the
same cache line or virtual memory page. The actual implementation
could then vary according to hardware implementation.

If usage of some specific shared data is defined as a single producer
thread (with full R/W access) and multiple consumer threads (with read
only access) in a write-back cache system, the producer would activate
the write-through after each update, while each consumer would
invalidate_cache before any read access, forcing a cache reload before
using the data. The source code would be identical in both producer as
well as consumer threads, but separate binary code could be compiled
for the producer and the consumers.
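For illustration, a hedged sketch of that single-producer split. cache_clean_range() and cache_invalidate_range() are hypothetical placeholders for whatever cache-maintenance calls the target actually provides (CMSIS or a vendor HAL, for instance), and the shared block is assumed to be placed in its own cache line by the linker:

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t sequence;
    uint32_t value;
} shared_sample_t;

extern shared_sample_t g_sample;     /* lives in the agreed shared region */

extern void cache_clean_range(const void *addr, size_t len);      /* write back to RAM   */
extern void cache_invalidate_range(const void *addr, size_t len); /* discard stale copy  */

/* Producer: update, then push the dirty line out to main memory. */
void producer_update(uint32_t v)
{
    g_sample.value = v;
    g_sample.sequence++;
    cache_clean_range(&g_sample, sizeof g_sample);
}

/* Consumer: drop any locally cached copy, then read fresh data. */
uint32_t consumer_read(void)
{
    cache_invalidate_range(&g_sample, sizeof g_sample);
    return g_sample.value;
}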
Post by David Brown
It would be nice to see cpus (or chipsets) having better hardware
support for a variety of synchronisation mechanisms, rather than just
"flush all previous writes to memory before doing any new writes"
instructions.
Is that really such a bad limitation?
Post by David Brown
Multi-port and synchronised memory is expensive, but
surely it would be possible to have a small amount that could be used
for things like mutexes, semaphores, and the control parts of queues.
Any system with memory mapped I/O registers must have a mechanism that
will disable any caching operations for these peripheral I/O
registers. Extending this to some RAM locations should be helpful.

---

BTW, discussing massively parallel systems with shared memory resembles
the memory-mapped file usage of some big database engines.

In these systems big (up to terabytes) files are mapped into the
virtual address space. After that, each byte in each memory mapped
file is accessed just as a huge (terabyte) array of bytes (or some
structured type) by simple assignment statements. With files larger
than a few hundred megabytes, a 64 bit processor architecture is
really nice to have :-)

The OS handles loading a segment from the physical disk file into the
memory using the normal OS page fault loading and writeback mechanism.
Instead of accessing the page file, the mechanism accesses the user
database files.

Thus you can think about the physical disks as the real memory and the
computer main memory as the L4 cache. Since the main memory is just
one level in the cache hierarchy, there are also similar cache
consistency issues as with other cached systems. In transaction
processing, typically some Commit/Rollback is used.

I guess that designing products around these massively parallel chips,
studying the cache consistency tricks used by memory mapped data base
file systems might be helpful.
David Brown
2017-08-10 13:11:14 UTC
Permalink
Post by u***@downunder.com
On Wed, 09 Aug 2017 10:03:40 +0200, David Brown
Post by David Brown
Post by u***@downunder.com
On Tue, 08 Aug 2017 17:11:22 +0200, David Brown
Post by David Brown
Post by Tom Gardner
Post by Tom Gardner
Consider single 32 core MCUs for £25 one-off. (xCORE)
When there are a large number of cores/processors available, I would
start a project by assigning a thread/process for each core. Later on
you might have to do some fine adjustments to put multiple threads
into one core or split one thread into multiple cores.
The XMOS is a bit special - it has hardware multi-threading. The 32
virtual core device has 4 real cores, each with 8 hardware threaded
virtual cores. For hardware threads, you get one thread per virtual core.
Post by u***@downunder.com
Post by David Brown
Post by Tom Gardner
Sooner or later people will have to come to terms with
non-global memory and multicore processing and (preferably)
message passing. Different abstractions and tools /will/
be required. Why not start now, from a good sound base?
Why hobble next-gen tools with last-gen problems?
That is /precisely/ the point - if you view it from the other side. A
key way to implement message passing, is to use shared memory underneath
- but you isolate the messy details from the ignorant programmer. If
you have written the message passing library correctly, using features
such as "consume" orders, then the high-level programmer can think of
passing messages while the library and the compiler conspire to give
optimal correct code even on very weak memory model cpus.
You are never going to get away from shared memory systems - for some
kind of multi-threaded applications, it is much, much more efficient
than message passing. But it would be good if multi-threaded apps used
message passing more often, as it is easier to get correct.
What is the issue with shared memory systems ? Use unidirectional
FIFOs between threads in shared memory for the actual message. The
real issue is how to inform the consuming thread that there is a new
message available in the FIFO.
That is basically how you make a message passing system when you have
shared memory for communication. The challenge for modern systems is
making sure that other cpus see the same view of memory as the sending
one. It is not enough to simply write the message, then update the
head/tail pointers for the FIFO. You have cache coherency, write
re-ordering buffers, out-of-order execution in the cpu, etc., as well as
compiler re-ordering of writes.
Sure you have to put the pointers into non-cached memory or into
write-through cache or use some explicit instruction to perform a
cache write-back.
You also need the pointed-to data to be in coherent memory of some sort
(or you must synchronise it explicitly). It does not help if another processor sees
the "data ready" flag become active before the data itself is visible!
Post by u***@downunder.com
The problem is the granularity of the cache, typically at least a
cache line or a virtual memory page size.
No, that is rarely an issue. Most SMP systems have cache snooping for
consistency. It /is/ a problem on non-uniform multi-processing systems.
(And cache lines can lead to cache line thrashing, which is a
performance problem but not a correctness problem.)
Post by u***@downunder.com
While "volatile" just affects code generation, it would be nice to
have, e.g., a "no_cache" keyword to affect run-time execution and cache
handling. This would put these variables into special program sections
and let the linker put all variables requiring "no_cache" into the
same cache line or virtual memory page. The actual implementation
could then vary according to hardware implementation.
That sounds like a disaster for coupling compilers, linkers, OS's, and
processor MMU setups. I don't see this happening automatically. Doing
so /manually/ - giving explicit sections to variables, and explicitly
configuring an MMU / MPU to make a particular area of the address space
non-cached is fine. I have done it myself on occasion. But that's
different from trying to make it part of the standard language.
Post by u***@downunder.com
If usage of some specific shared data is defined as a single producer
thread (with full R/W access) and multiple consumer threads (with read
only access) in a write-back cache system, the producer would activate
the write-through after each update, while each consumer would
invalidate_cache before any read access, forcing a cache reload before
using the data. The source code would be identical in both producer as
well as consumer threads, but separate binary code could be compiled
for the producer and the consumers.
That's what atomic access modes and fences are for in C11/C++11.
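For example, a minimal sketch of the fence-based form in C11 (the names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

static int payload;                  /* ordinary, non-atomic data   */
static atomic_bool ready;            /* published "data valid" flag */

void producer(int v)
{
    payload = v;                                   /* 1: write the data      */
    atomic_thread_fence(memory_order_release);     /* 2: order it before...  */
    atomic_store_explicit(&ready, true,
                          memory_order_relaxed);   /* 3: ...the flag publish */
}

bool consumer(int *out)
{
    if (!atomic_load_explicit(&ready, memory_order_relaxed))
        return false;
    atomic_thread_fence(memory_order_acquire);     /* pairs with the release */
    *out = payload;
    return true;
}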
Post by u***@downunder.com
Post by David Brown
It would be nice to see cpus (or chipsets) having better hardware
support for a variety of synchronisation mechanisms, rather than just
"flush all previous writes to memory before doing any new writes"
instructions.
Is that really such a bad limitation?
For big SMP systems like modern x86 or PPC chips? Yes, it is - these
barriers can cost hundreds of cycles of delay. And if you want the
sequentially consistent barriers (not just acquire/release), so that all
cores see the same order of memory, you need a broadcast that makes
/all/ cores stop and flush all their write queues. (Cache lines don't
need to be flushed - cache snooping takes care of that already.)

I have used a microcontroller with a dedicated "semaphore" peripheral
block. It was very handy, and very efficient for synchronising between
the two cores.
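As an illustration only (the addresses and register semantics below are invented, not those of any particular part), such a semaphore block is typically used from C like this:

#include <stdint.h>

/* Hypothetical memory-mapped semaphore peripheral:
   reading TAKE returns 1 if the semaphore was granted to this core. */
#define HWSEM_TAKE (*(volatile uint32_t *)0x40070000u)
#define HWSEM_GIVE (*(volatile uint32_t *)0x40070004u)

void hwsem_lock(void)
{
    while (HWSEM_TAKE == 0)
        ;                            /* spin until the hardware grants it */
}

void hwsem_unlock(void)
{
    HWSEM_GIVE = 1u;
}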
Post by u***@downunder.com
Post by David Brown
Multi-port and synchronised memory is expensive, but
surely it would be possible to have a small amount that could be used
for things like mutexes, semaphores, and the control parts of queues.
Any system with memory mapped I/O registers must have a mechanism that
will disable any caching operations for these peripheral I/O
registers. Extending this to some RAM locations should be helpful.
Agreed. But that ram would, in practice, be best implemented as a
separate block of fast ram independent from the main system ram. For
embedded systems, a bit of on-chip static ram would make sense.

And note that it is /not/ enough to be uncached - you also need to make
sure that writes are done in order, and that reads are not done
speculatively or out of order.
Post by u***@downunder.com
---
BTW, discussing massively parallel systems with shared memory resembles
the memory-mapped file usage of some big database engines.
In these systems big (up to terabytes) files are mapped into the
virtual address space. After that, each byte in each memory mapped
file is accessed just as a huge (terabyte) array of bytes (or some
structured type) by simple assignment statements. With files larger
than a few hundred megabytes, a 64 bit processor architecture is
really nice to have :-)
The OS handles loading a segment from the physical disk file into the
memory using the normal OS page fault loading and writeback mechanism.
Instead of accessing the page file, the mechanism accesses the user
database files.
Thus you can think about the physical disks as the real memory and the
computer main memory as the L4 cache. Since the main memory is just
one level in the cache hierarchy, there are also similar cache
consistency issues as with other cached systems. In transaction
processing, typically some Commit/Rollback is used.
There is some saying about any big enough problem in computing being
just an exercise in caching, but I forget the exact quotation.

Serious caching systems are very far from easy to make, ensuring
correctness, convenient use, and efficiency.
Post by u***@downunder.com
I guess that designing products around these massively parallel chips,
studying the cache consistency tricks used by memory mapped data base
file systems might be helpful.
Indeed.
Walter Banks
2017-08-16 22:39:54 UTC
Permalink
Post by David Brown
That sounds like a disaster for coupling compilers, linkers, OS's, and
processor MMU setups. I don't see this happening automatically. Doing
so /manually/ - giving explicit sections to variables, and explicitly
configuring an MMU / MPU to make a particular area of the address space
non-cached is fine. I have done it myself on occasion. But that's
different from trying to make it part of the standard language.
A couple of comments on this. Compiling for multiple processors, I have used
named address spaces (IEC/ISO 18037) to define private and shared space.
The nice part of that is that applications can start out running on a single
platform and then be split later with minimum impact on the source code.

Admittedly I have done this on non MMU systems.


I have linked across multiple processors including cases of
heterogeneous processors.

Another comment about inter-processor communication. We found out a
long time ago that dual- or multi-port memory is not that much of an
advantage in most applications. The data rate can actually be quite low.
We have done quite a few consumer electronics packages with serial data
well below a megabit per second, some as low as 8 kbit/s. It creates skew
between processor execution but generally has very limited impact on
application function or performance.

w..

David Brown
2017-08-17 07:37:12 UTC
Permalink
Post by Walter Banks
Post by David Brown
That sounds like a disaster for coupling compilers, linkers, OS's, and
processor MMU setups. I don't see this happening automatically. Doing
so /manually/ - giving explicit sections to variables, and explicitly
configuring an MMU / MPU to make a particular area of the address space
non-cached is fine. I have done it myself on occasion. But that's
different from trying to make it part of the standard language.
A couple of comments on this. Compiling for multiple processors, I have used
named address spaces (IEC/ISO 18037) to define private and shared space.
"IEC/ISO 18037" completely misses the point, and is a disaster for the
world of embedded C programming. It is an enormous disappointment to
anyone who programs small embedded systems in C, and it is no surprise
that compiler implementers have almost entirely ignored it in the 15
years of its existence. Named address spaces are perhaps the only
interesting and useful idea there, but the TR does not cover
user-definable address spaces properly.
Post by Walter Banks
The nice part of that is that applications can start out running on a single
platform and then be split later with minimum impact on the source code.
Admittedly I have done this on non MMU systems.
On some systems, such a "no_cache" keyword/attribute is entirely
possible. My comment is not that this would not be a useful thing, but
that it could not be a part of the C standard language.

For example, on the Nios processor (Altera soft cpu for their FPGAs -
and I don't remember if this was just for the original Nios or the
Nios2) the highest bit of an address was used to indicate "no cache, no
reordering", but it was otherwise unused for address decoding. When you
made a volatile access, the compiler ensured that the highest bit of the
address was set. On that processor, implementing a "no_cache" keyword
would be easy - it was already done for "volatile".
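A sketch of how that kind of scheme looks from the source side (the bit value and helper are illustrative, not the actual Nios definitions):

#include <stdint.h>

#define UNCACHED_BIT 0x80000000u     /* assumed "bypass cache" address bit */

static inline volatile uint32_t *uncached32(uint32_t *p)
{
    return (volatile uint32_t *)((uintptr_t)p | UNCACHED_BIT);
}

uint32_t shared_word;

void publish(uint32_t v)
{
    *uncached32(&shared_word) = v;   /* bypasses the data cache */
}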

But on a processor that has an MMU? It would be a serious problem. And
how would you handle casts to a no_cache pointer? Casting a pointer to
normal data into a pointer to volatile is an essential operation in lots
of low-level code. (It is implementation-defined behaviour, but works
"as expected" in all compilers I have heard of.)

So for some processors, "no_cache" access is easy. For some, it would
require support from the linker (or at least linker scripts) and MMU
setup, but have no possibility for casts. For others, memory barrier
instructions and cache flush instructions would be the answer. On
larger processors, that could quickly be /very/ expensive - much more so
than an OS call to get some uncached memory (dma_alloc_coherent() on
Linux, for example).

Uncached accesses cannot be implemented sensibly or efficiently in the
same way on different processors, and in some systems they cannot be done
at all. The concept of cache is alien to the C standards. Any code
that might need uncached memory is inherently low-level and highly
system dependent.

Therefore it is a concept that has no place in the C standards, even
though it is a feature that could be very useful in many specific
implementations for specific targets. A great thing about C is that
there is no problem having such implementation-specific features and
extensions.
Post by Walter Banks
I have linked across multiple processors including cases of
heterogeneous processors.
An other comment about inter-processor communication. We found out a
long time ago that dual or multi port memory is not that much of an
advantage in most applications. The data rate can actually be quite low.
We have done quite a few consumer electronics packages with serial data
well below a mbit some as low as 8Kbits/second. It creates skew between
processor execution but generally has very limited impact on application
function or performance.
w..
Walter Banks
2017-08-17 12:24:18 UTC
Permalink
Post by David Brown
"IEC/ISO 18037" completely misses the point, and is a disaster for
the world of embedded C programming. It is an enormous
disappointment to anyone who programs small embedded systems in C,
and it is no surprise that compiler implementers have almost entirely
ignored it in the 15 years of its existence. Named address spaces
are perhaps the only interesting and useful idea there, but the TR
does not cover user-definable address spaces properly.
Guilty: I wrote the section of 18037 on named address spaces, based on our
use in consumer applications and earlier WG-14 papers.

We extended the named address space material to also include processor
named spaces (N1351, N1386).

The fixed point material in 18037 is in my opinion reasonable.

We use both of these a lot especially in programming the massively
parallel ISA's I have been working on in the last few years.

w..
David Brown
2017-08-17 14:06:17 UTC
Permalink
Post by Walter Banks
Post by David Brown
"IEC/ISO 18037" completely misses the point, and is a disaster for
the world of embedded C programming. It is an enormous
disappointment to anyone who programs small embedded systems in C,
and it is no surprise that compiler implementers have almost entirely
ignored it in the 15 years of its existence. Named address spaces
are perhaps the only interesting and useful idea there, but the TR
does not cover user-definable address spaces properly.
Guilty: I wrote the section of 18037 on named address spaces, based on our
use in consumer applications and earlier WG-14 papers.
We extended the named address space material to also include processor
named spaces (N1351, N1386).
I don't know the details of these different versions of the papers. I
have the 2008 draft of ISO/IEC TR 18037:2008 in front of me.

With all due respect to your work and experience here, I have a good
deal of comments on this paper. Consider it constructive criticism due
to frustration at a major missed opportunity. In summary, TR 18037 is
much like EC++ - a nice idea when you look at the title, but an almost
total waste of time for everyone except compiler company marketing droids.


The basic idea of named address spaces that are syntactically like const
and volatile qualifiers is, IMHO, a good plan. For an example usage,
look at the gcc support for "__flash" address spaces in the AVR port of gcc:

<https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html>

The AVR needs different instructions for accessing data in flash and
ram, and address spaces provide a neater and less error-prone solution
than macros or function calls for flash data access.
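For example, with the AVR named address space described in the GCC documentation linked above (avr-gcc only), flash-resident data is declared and read like ordinary data:

/* Stored in flash; reads compile to LPM instead of RAM loads. */
const __flash char greeting[] = "Hello";

char greeting_char(unsigned int i)
{
    return greeting[i];
}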

So far, so good - and if that is your work, then well done. The actual
text of the document could, IMHO, benefit from a more concrete example
usage of address spaces (such as for flash access, as that is likely to
be a very popular usage).


The register storage class stuff, however, is not something I would like
to see in C standards. If I had wanted to mess with specific cpu
registers such as flag registers, I would be programming in assembly. C
is /not/ assembly - we use C so that we don't have to use assembly.
There may be a few specific cases of particular awkward processors for
which it is occasionally useful to have direct access to flag bits -
those are very much in the minority. And they are getting more in the
minority as painful architectures like COP8 and PIC16 are being dropped
in favour of C-friendly processors. It is absolutely fine to put
support for condition code registers (or whatever) into compilers as
target extensions. I can especially see how it can help compiler
implementers to write support libraries in C rather than assembly. But
it is /not/ something to clutter up C standards or for general embedded
C usage.


The disappointing part of named address spaces is in Annex B.1. It is
tantalisingly close to allowing user-defined address spaces with
specific features such as neat access to data stored in other types of
memory. But it is missing all the detail needed to make it work, how
and when it could be used, examples, and all the thought into how it
would interplay with other features of the language. It also totally
ignores some major issues that are very contrary to the spirit and
philosophy of C. When writing C, one expects "x = 1;" to operate
immediately as a short sequence of instructions, or even to be removed
altogether by the compiler optimiser. With a user-defined address
space, such as an SPI eeprom mapping, this could take significant time,
it could interact badly with other code (such as another thread or an
interrupt the is also accessing the SPI bus), it could depend on setup
of things outside the control of the compiler, and it could fail.

You need to think long and hard as to whether this is something
desirable in a C compiler. It would mean giving up the kind of
transparency and low-level predictability that are some of the key
reasons people choose C over C++ for such work. If the convenience of
being able to access different types of data in the same way in code is
worth it, then these issues must be made clear and the mechanisms
developed - if not, then the idea should be dropped. A half-written
half-thought-out annex is not the answer.


One point that is mentioned in Annex B is specific little endian and big
endian access. This is a missed opportunity for the TR - qualifiers
giving explicit endianness to a type would be extremely useful,
completely independently of the named address space concept. Such
qualifiers would be simple to implement on all but the weirdest of
hardware platforms, and would be massively useful in embedded programming.
Post by Walter Banks
The fixed point material in 18037 is in my opinion reasonable.
No, it is crap.

Look at C99. Look what it gave us over C90. One vital feature that
made a huge difference to embedded programming is <stdint.h> with fixed
size integer types. There is no longer any need for every piece of
embedded C software, every library, every RTOS, to define its own types
u16, u16t, uint_16_t, uWORD, RTOS_u16, and whatever. Now we can write
uint16_t and be done with it.

Then someone has come along and written this TR with a total disregard
for this. So /if/ this support gets widely implemented, and /if/ people
start using it, what types will people use? Either they will use
"signed long _Fract" and friends, making for unreadable code due to the
long names and having undocumented target-specific assumptions that make
porting an error prone disaster, or we are going to see a proliferation
of fract15_t, Q31, fp0_15, and a dozen different incompatible variations.

If this was going to be of any use, a set of specific, fixed-size type
names should have been defined from day one. The assorted _Fract and
_Accum types are /useless/. They should not exist. My suggestion for a
naming convention would be uint0q16_t, int7q8_t, etc., for the number of
bits before and after the binary point. Implementations should be free
to implement those that they can handle efficiently, and drop any that
they cannot - but there should be no ambiguity.
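As a sketch of what such an explicit name buys (the type name follows the convention suggested above and is purely illustrative):

#include <stdint.h>

typedef int16_t int0q15_t;           /* 1 sign bit, 0 integer bits, 15 fraction bits */

static inline int0q15_t q15_mul(int0q15_t a, int0q15_t b)
{
    int32_t p = (int32_t)a * b;      /* Q30 intermediate */
    p = (p + (1 << 14)) >> 15;       /* round and rescale to Q15
                                        (assumes arithmetic right shift) */
    if (p >  32767) p =  32767;      /* saturate */
    if (p < -32768) p = -32768;
    return (int0q15_t)p;
}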

This would also avoid the next point - C99 was well established before
the TR was written. What about the "long long" versions for completeness?

Of course, with a sensible explicit naming scheme, as many different
types as you want could exist.


Then there is the control of overflow. It is one thing to say
saturation would be a nice idea - but it is absolutely, totally and
completely /wrong/ to allow this to be controllable by a pragma.
Explicit in the type - yes, that's fine. Implicit based on what
preprocessing directives happen to have passed before that bit of the
source code is translated? Absolutely /not/.

Equally, pragmas for precision and rounding - in fact, pragmas in
general - are a terrible idea. Should the types behave differently in
different files in the same code?


Next up - fixed point constants. Hands up all those that think it is
intuitive that 0.5uk makes it obvious that this is an "unsigned _Accum"
constant? Write it as "(uint15q16_t) 0.5" instead - make it clear and
explicit. The fixed point constant suffixes exist purely because
someone thought there should be suffixes and picked some letters out of
their hat. Oh, and for extra fun lets make these suffixes subtly
different from the conversion specifiers for printf. You remember?
That function is already too big, slow and complicated for many
embedded C systems.


Then there is the selection of functions in <stdfix.h>. We have
type-generic maths support in C99. There is no place for individual
functions like abshr, abslr, abshk, abslk - a single type-generic absfx
would do the job. We don't /need/ these underlying functions. The
implementation may have them, but C programmers don't need to see that
mess. Hide it away as implementation details. That would leave
everything much simpler to describe, and much simpler to use, and mean
it will work with explicit names for the types.
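A sketch of how a single type-generic entry point could look with C11 _Generic; the type and function names are the hypothetical explicit ones suggested earlier, not anything from the TR:

#include <stdint.h>

typedef int16_t int0q15_t;
typedef int32_t int0q31_t;

static inline int0q15_t q15_abs(int0q15_t x) { return (int0q15_t)(x < 0 ? -x : x); }
static inline int0q31_t q31_abs(int0q31_t x) { return x < 0 ? -x : x; }

/* One type-generic macro instead of abshr/abslr/abshk/abslk. */
#define absfx(x) _Generic((x),  \
        int0q15_t: q15_abs,     \
        int0q31_t: q31_abs)(x)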


And in the thirteen years that it has taken between this TR being first
published, and today, when implementations are still rare, incomplete
and inefficient, we now have microcontrollers that will do floating
point quickly for under a dollar. Fixed point is rapidly becoming of
marginal use or even irrelevant.


As for the hardware IO stuff, the less said about that the better. It
will /never/ be used. It has no benefits over the system used almost
everywhere today - volatile accesses through casted constant addresses.
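That idiom, for reference (the peripheral address is made up):

#include <stdint.h>

#define UART0_DATA (*(volatile uint32_t *)0x4000C000u)   /* illustrative address */

void uart_send(uint32_t c)
{
    UART0_DATA = c;
}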


The TR has failed to give the industry anything that embedded C
programmers need, it has made suggestions that are worse than useless,
and by putting in so much that is not helpful it has delayed any hope of
implementation and standardisation for the ideas that might have been
helpful.
Post by Walter Banks
We use both of these a lot especially in programming the massively
parallel ISA's I have been working on in the last few years.
Implementation-specific extensions are clearly going to be useful for
odd architectures like this. It is the attempt at standardisation in
the TR that is a total failure.
Walter Banks
2017-08-17 16:15:44 UTC
Permalink
Post by David Brown
The AVR needs different instructions for accessing data in flash and
ram, and address spaces provide a neater and less error-prone
solution than macros or function calls for flash data access.
So far, so good - and if that is your work, then well done. The
actual text of the document could, IMHO, benefit from a more concrete
example usage of address spaces (such as for flash access, as that is
likely to be a very popular usage).
The named address space stuff is essentially all mine.
Post by David Brown
The register storage class stuff, however, is not something I would
like to see in C standards. If I had wanted to mess with specific
CPU registers such as flag registers, I would be programming in
assembly. C is /not/ assembly - we use C so that we don't have to
use assembly. There may be a few specific cases of particular
awkward processors for which it is occasionally useful to have direct
access to flag bits - those are very much in the minority. And they
are getting more in the minority as painful architectures like COP8
and PIC16 are being dropped in favour of C-friendly processors. It
is absolutely fine to put support for condition code registers (or
whatever) into compilers as target extensions. I can especially see
how it can help compiler implementers to write support libraries in
C rather than assembly. But it is /not/ something to clutter up C
standards or for general embedded C usage.
To be really clear this was a TR and never expected to be added to the C
standards at the time.

In the current environment I would like to see the C standards moved
forward to support the emerging ISA's. There are many current
applications that need additions to the language to describe effective
solutions to some problems. Ad-hoc additions prevent the very thing that
C is promoted for, that is portability. C standards are supposed to
codify existing practice and so often the politics of standards become
arguments about preserving old standards rather than support for newer
processor technology. I know from what I have been doing that both the
spirit and the approach to code development in C can deal with changes in
applications and processor technology.

So many of the development tools still are restricted by the technology
limits of development environments of 40 years ago.
Post by David Brown
The disappointing part of named address spaces is in Annex B.1. It
is tantalisingly close to allowing user-defined address spaces with
specific features such as neat access to data stored in other types
of memory. But it is missing all the detail needed to make it work,
how and when it could be used, examples, and all the thought into
how it would interplay with other features of the language. It also
totally ignores some major issues that are very contrary to the
spirit and philosophy of C. When writing C, one expects "x = 1;" to
operate immediately as a short sequence of instructions, or even to
be removed altogether by the compiler optimiser. With a
user-defined address space, such as an SPI eeprom mapping, this could
take significant time, it could interact badly with other code (such
as another thread or an interrupt that is also accessing the SPI bus),
it could depend on setup of things outside the control of the
compiler, and it could fail.
The named address space has often been used to support diverse forms
of memory. To use your example, x = 1: the choice of where x is located
and how it is accessed is made where it is declared. How it is handled
after that is made by the compiler. The assumption is that the code
is written with functional intent.

As valid as the SPI conflict is, it is a strawman in practice. C is
filled with undefined and ambiguous cases and this type of potential
problem in practice is very rare.
Post by David Brown
You need to think long and hard as to whether this is something
desirable in a C compiler.
I have and it is. Once I passed the general single address space
C model named address space opened a level of flexibility that
allows C to be used in a variety of application environments
that conventional C code does not work well for.
Post by David Brown
It would mean giving up the kind of transparency and low-level
predictability that are some of the key reasons people choose C over
C++ for such work. If the convenience of being able to access
different types of data in the same way in code is worth it, then
these issues must be made clear and the mechanisms developed - if
not, then the idea should be dropped. A half-written
half-thought-out annex is not the answer.
I buy the documentation point.

From a usage point I disagree. Writing an application program that can
be spread over many processors is a good example. In the two decades
since this work was initially done things have changed considerably from
consumer products that distributed an application over 3 or 4
processors (after initially prototyping on a single processor). In
these, processor usage was almost always manually allocated using
geographical centers of reference.

This has evolved to compiler analysis that automates this whole process
over many, many processors.

w..
David Brown
2017-08-18 12:45:27 UTC
Permalink
Post by Walter Banks
Post by David Brown
The AVR needs different instructions for accessing data in flash and
ram, and address spaces provide a neater and less error-prone solution
than macros or function calls for flash data access.
So far, so good - and if that is your work, then well done. The
actual text of the document could, IMHO, benefit from a more concrete
example usage of address spaces (such as for flash access, as that is
likely to be a very popular usage).
The named address space stuff is essentially all mine.
Post by David Brown
The register storage class stuff, however, is not something I would
like to see in C standards. If I had wanted to mess with specific CPU
registers such as flag registers, I would be programming in assembly.
C is /not/ assembly - we use C so that we don't have to use assembly.
There may be a few specific cases of particular
awkward processors for which it is occasionally useful to have direct
access to flag bits - those are very much in the minority. And they
are getting more in the minority as painful architectures like COP8
and PIC16 are being dropped in favour of C-friendly processors. It
is absolutely fine to put support for condition code registers (or
whatever) into compilers as target extensions. I can especially see
how it can help compiler implementers to write support libraries in
C rather than assembly. But it is /not/ something to clutter up C
standards or for general embedded C usage.
To be really clear this was a TR and never expected to be added to the C
standards at the time.
I assume that it was hoped to become an addition to the C standards, or
at least a basis and inspiration for such additions - otherwise what was
the point? I would be quite happy with the idea of "supplementary"
standards to go along with the main C standards, to add features or to
provide a common set of implementation-dependent features. For example,
Posix adds a number of standard library functions, and gives guarantees
about the size and form of integers - thus people can write code that is
portable to Posix without imposing requirements on compilers for an
8051. A similar additional standard giving features for embedded
developers, but without imposing requirements on PC programmers, would
make sense.
Post by Walter Banks
In the current environment I would like to see the C standards moved
forward to support the emerging ISA's. There are many current
applications that need additions to the language to describe effective
solutions to some problems. Ad-hoc additions prevent the very thing that
C is promoted for, that is portability.
C is intended to support two significantly different types of code. One
is portable code that can run on a wide range of systems. The other is
system-specific code that is targeted at a very small number of systems.
If you are writing code that depends on features of a particular ISA,
then you should be using target-specific or implementation-dependent
features.

If a new feature is useful across a range of targets, then sometimes a
middle ground would make more sense. The C standards today have that in
the form of optional features. For example, most targets support nice
8-bit, 16-bit, 32-bit and 64-bit integers with two's complement
arithmetic. But some targets do not support them. So C99 and C11 give
standard names and definitions of these types, but make them optional.
This works well for features that many targets can support, and many
people would have use of.

For features that are useful on only a small number of ISA's, they
should not be in the main C standards at all - a supplementary standard
would make more sense. Yes, that would mean fragmenting the C world
somewhat - but I think it would still be a better compromise.


Incidentally, can you say anything about these "emerging ISA's" and the
features needed? I fully understand if you cannot give details in
public (of course, you'll need to do so some time if you want them
standardised!).
Post by Walter Banks
C standards are supposed to
codify existing practice and so often the politics of standards become
arguments about preserving old standards rather than support for newer
processor technology.
That is a major point of them, yes.
Post by Walter Banks
I know from what I have been doing that both the
spirit and the approach to code development in C can deal with changes in
applications and processor technology.
So many of the development tools still are restricted by the technology
limits of development environments of 40 years ago.
It is the price of backwards compatibility. Like most C programmers, I
have my own ideas of what is "old cruft" that could be removed from C
standards without harm to current users. And like most C programmers,
my ideas about what is "old cruft" will include things that some other C
programmers still use to this day.
Post by Walter Banks
Post by David Brown
The disappointing part of named address spaces is in Annex B.1. It is
tantalisingly close to allowing user-defined address spaces with
specific features such as neat access to data stored in other types of
memory. But it is missing all the detail needed to make it work, how
and when it could be used, examples, and all the thought into
how it would interplay with other features of the language. It also
totally ignores some major issues that are very contrary to the spirit
and philosophy of C. When writing C, one expects "x = 1;" to operate
immediately as a short sequence of instructions, or even to be removed
altogether by the compiler optimiser. With a
user-defined address space, such as an SPI eeprom mapping, this could
take significant time, it could interact badly with other code (such
as another thread or an interrupt the is also accessing the SPI bus),
it could depend on setup of things outside the control of the
compiler, and it could fail.
The named address space has often been used to support diverse forms
of memory. To use your example x = 1; The choice where x is located
and how it is accessed is made where it is declared. How it is handled
after that is made by the compiler. The assumption is that the code
is written with functional intent.
Yes, that is the nice thing about named address spaces here.
Post by Walter Banks
As valid as the SPI conflict is it is a strawman in practice. C is
filled with undefined and ambiguous cases and this type of potential
problem in practice is very rare.
I don't agree. If you first say that named address spaces give a way of
running arbitrary user code for something like "x = 1;", you are making
a very big change in the way C works. And you make it very easy for
programmers to make far-reaching code changes in unexpected ways.

Imagine a program for controlling a music system. You have a global
variable "volume", set in the main loop when the knob is checked, and
read in a timer interrupt that is used to give smooth transition of the
actual volume output (for cool fade-in and fade-out). Somebody then
decides that the volume should be kept in non-volatile memory so that it
is kept over power cycles. Great - you just stick a "_I2CEeprom"
address space qualifier on the definition of "volume". Job done.
Nobody notices that the timer interrupts now take milliseconds instead
of microseconds to run. And nobody - except the unlucky customer -
notices that all hell breaks loose and his speakers are blown when the
volume timer interrupt happens in the middle of a poll of the I2C
temperature sensors.

Now, you can well say that this is all bad program design, or poor
development methodology, or insufficient test procedures. But the point
is that allowing such address space modifiers so simply changes the way
C works - it changes what people expect from C. A C programmer has a
very different expectation from "x = 1;" than "x =
readFromEeprom(address);".

I am /not/ saying the benefits are not worth the costs here - I am
saying this needs to be considered very, very carefully, and features
needed to be viewed in the context of the effects they can cause here.
There are no /right/ answers - but calling it "a strawman in practice"
is very much the /wrong/ answer. Problems that occur very rarely are
the worst kind of problems.


This is a very different case from something like flash access, or
access to ram in different pages, where the access is quite clearly
defined and has definite and predictable timings. You may still have
challenges - if you need to set a "page select register", how do you
ensure that everything works with interrupts that may also use this
address space? But the challenges are smaller, and the benefits greater.
Post by Walter Banks
Post by David Brown
You need to think long and hard as to whether this is something
desirable in a C compiler.
I have and it is. Once I passed the general single address space
C model named address space opened a level of flexibility that
allows C to be used in a variety of application environments
that conventional C code does not work well for.
Post by David Brown
It would mean giving up the kind of transparency and low-level
predictability that are some of the key reasons people choose C over
C++ for such work. If the convenience of being able to access
different types of data in the same way in code is worth it, then
these issues must be made clear and the mechanisms developed - if not,
then the idea should be dropped. A half-written
half-thought-out annex is not the answer.
I buy the documentation point.
From a usage point I disagree. Writing an application program that can
be spread over many processors is a good example.
That is a very different kind of programming from the current
mainstream, and it is questionable whether C is a sensible choice
of language for such systems. But okay, let's continue...
Post by Walter Banks
In the two decades
since this work was initially done things have changed considerably from
consumer products that distributed an application over 3 or 4
processors. (after initially prototyping on a single processor). In
these processor usage was almost always manually allocation using
geographical centers of reference.
This has evolved to compiler analysis that automate this whole process
over many many processors.
I assume you are not talking about multi-threaded code working on an SMP
system - that is already possible in C, especially with C11 features
like threading, atomic access, and thread-local data. (Of course more
features might be useful, and just because it is possible does not mean
programmers get things right.)

You are talking about an MPPA ("massively parallel processor array") where you
have many small cores with local memory distributed around a chip, with
communication channels between the nodes.

I would say that named address spaces are not the answer here - the
answer is to drop C, or at least /substantially/ modify it. The XMOS xC
language is an example.

A key point is to allow the definition of a "node" of work with local
data, functions operating in the context of that node, and communication
channels in and out. Nodes should not be able to access data or
functions on other nodes except through the channels, though for
convenience of programming you might allow access to fixed data
(compile-time constants, and functions with no static variables, which
can all be duplicated as needed). Channel-to-channel connections should
ideally be fixed at compile time, allowing the linker/placer/router to
arrange the nodes to match the physical layout of the device.

Lots of fun, but not C as we know it.
Walter Banks
2017-08-25 13:51:34 UTC
Permalink
Post by David Brown
Post by Walter Banks
To be really clear this was a TR and never expected to be added to
the C standards at the time.
I assume that it was hoped to become an addition to the C standards,
or at least a basis and inspiration for such additions - otherwise
what was the point? I would be quite happy with the idea of
"supplementary" standards to go along with the main C standards, to
add features or to provide a common set of implementation-dependent
features. For example, Posix adds a number of standard library
functions, and gives guarantees about the size and form of integers -
thus people can write code that is portable to Posix without imposing
requirements on compilers for an 8051. A similar additional standard
giving features for embedded developers, but without imposing
requirements on PC programmers, would make sense.
Post by Walter Banks
In the current environment I would like to see the C standards
moved forward to support the emerging ISA's. There are many
current applications that need additions to the language to
describe effective solutions to some problems. Ad-hoc additions
prevent the very thing that C is promoted for, that is
portability.
C is intended to support two significantly different types of code.
One is portable code that can run on a wide range of systems. The
other is system-specific code that is targeted at a very small number
of systems. If you are writing code that depends on features of a
particular ISA, then you should be using target-specific or
implementation-dependent features.
If a new feature is useful across a range of targets, then sometimes
a middle ground would make more sense. The C standards today have
that in the form of optional features. For example, most targets
support nice 8-bit, 16-bit, 32-bit and 64-bit integers with two's
complement arithmetic. But some targets do not support them. So C99
and C11 give standard names and definitions of these types, but make
them optional. This works well for features that many targets can
support, and many people would have use of.
For features that are useful on only a small number of ISA's, they
should not be in the main C standards at all - a supplementary
standard would make more sense. Yes, that would mean fragmenting the
C world somewhat - but I think it would still be a better
compromise.
At the time 18037 was written there was a consensus that C should have a
core set of common features and additional standards written to support
specific additional application areas. The working title for 18037 was
"C standards for Embedded Systems". Common core features turned out in
practice to be very difficult to agree on and it was essentially abandoned.

The standard names were the way of tying more diverse users together. In
general they have worked well to support the types of embedded work that I do
without straying too far from the C language.
Post by David Brown
Post by Walter Banks
So many of the development tools still are restricted by the
technology limits of development environments of 40 years ago.
It is the price of backwards compatibility. Like most C programmers,
I have my own ideas of what is "old cruft" that could be removed from
C standards without harm to current users. And like most C
programmers, my ideas about what is "old cruft" will include things
that some other C programmers still use to this day.
The argument is more about development tools than language. Our tools,
for example, support both compiling to objects and linking, as well as
absolute compilation of code to an executable. We have supported both for a
long time. Our customers are split over the approach they use for
application development.

We have always compiled directly to machine code in our tools as well; that
is a tool issue, not a language-specific one. Development platforms once had
limited resources that were overcome with linking and post-assembly
translation. Those restrictions don't apply any more.

The effects of old code-generation technology are even more manifest than
that. Linking has become a lot smarter in terms of code generation, but it
is a lot more computationally expensive than running a compiler strategy
pass to analyze the data and control flow of an application. That
information can give a compiler an overall plan for creating the code of
the whole application at once.
Post by David Brown
Post by Walter Banks
The named address space has often been used to support diverse
forms of memory. To use your example, x = 1: the choice of where x is
located and how it is accessed is made where it is declared. How it
is handled after that is made by the compiler. The assumption is
that the code is written with functional intent.
Yes, that is the nice thing about named address spaces here.
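
For example, avr-gcc's __flash qualifier (a GCC extension in the spirit
of TR 18037, not standard C, and AVR-only) puts the decision entirely in
the declaration; the code that uses the object stays the same:

    /* avr-gcc named address space - GNU C only, AVR target assumed. */
    const __flash unsigned char table[] = { 1, 2, 3, 4 };

    unsigned char read_entry(unsigned char i)
    {
        return table[i];   /* compiler emits LPM (program-memory read)
                              instead of a normal data-memory load    */
    }
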
Post by Walter Banks
As valid as the SPI conflict is, it is a strawman in practice. C is
filled with undefined and ambiguous cases, and in practice this type of
potential problem is very rare.
I don't agree.
I am /not/ saying the benefits are not worth the costs here - I am
saying this needs to be considered very, very carefully, and
features need to be viewed in the context of the effects they can
cause here. There are no /right/ answers - but calling it "a strawman
in practice" is very much the /wrong/ answer. Problems that occur
very rarely are the worst kind of problems.
I essentially stand behind my comments. Moving variable access methods
using named address spaces has caused few problems in practice.
Post by David Brown
That is a very different kind of programming from the current
mainstream, and it is questionably as to whether C is a sensible
choice of language for such systems. But okay, let's continue...
Why? I have no real conflict with historical C and generally have no
reason to want to impact old functionality. My approach is similar to the
change from K&R argument declarations to prototypes: add new syntax,
support both, and 20 years later the marketplace will sort out which is used.
Post by David Brown
Post by Walter Banks
In the two decades since this work was initially done, things have
changed considerably from consumer products that distributed an
application over 3 or 4 processors (after initially prototyping on
a single processor). In those, processor allocation was almost always
done manually, using geographical centers of reference.
This has evolved into compiler analysis that automates the whole
process over many, many processors.
I assume you are not talking about multi-threaded code working on an
SMP system - that is already possible in C, especially with C11
features like threading, atomic access, and thread-local data. (Of
course more features might be useful, and just because it is possible
does not mean programmers get things right.)
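
Roughly like this, assuming the implementation actually provides
<threads.h> - it is an optional part of C11:

    #include <threads.h>
    #include <stdatomic.h>

    static atomic_int hits;                 /* C11 atomic shared counter  */
    static thread_local int worker_id;      /* C11 thread-local storage   */

    static int worker(void *arg)
    {
        worker_id = *(int *)arg;
        atomic_fetch_add(&hits, 1);         /* race-free increment        */
        return 0;
    }

    int run_pair(void)
    {
        thrd_t a, b;
        int id_a = 1, id_b = 2;
        thrd_create(&a, worker, &id_a);
        thrd_create(&b, worker, &id_b);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        return atomic_load(&hits);          /* 2 on success               */
    }
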
You are talking about an MPPA ("massively parallel processor array") where
you have many small cores with local memory distributed around a
chip, with communication channels between the nodes.
That is a close enough description. C has been filled with ad-hoc
separate memory spaces: the thread-local data you just mentioned, DSP
separate memories, historically real separate spaces for small embedded
systems, paging and protected memory. Don't discard these, but formalize
their declaration and use. Do it in a way that functionally incorporates
what has been done, and don't do anything to impede the continued use of
what is there now.

In a similar way, look at the current approach to multiprocessor
support. How different are threads from multiple execution units? Why
shouldn't multiprocessors be managed in ways similar to how memory space
is currently managed and allocated, at least allowing these to be
machine-optimized instead of manually optimized? Finally, why shouldn't
generic approaches be formalized so the tools aren't restricting
application development?

My arguments for doing this in the C context are two. First, the real
impact on the language is small: all are additions, not changes, and have
no impact on existing code bases. Second, C is a living language, and it has
lasted as long as it has because the standards for the language are there to
codify current practice.

w..
Alex Afti
2023-12-19 17:35:19 UTC
Permalink
Both Computer Engineering and Electronics and Communication Engineering can provide a strong foundation for working on embedded systems and IoT. However, the specific focus and coursework may vary between these programs, and the best choice depends on your interests and career goals.

Computer Engineering:
Focus: Computer engineering typically emphasizes the design and integration of computer systems. This includes hardware and software aspects, making it well-suited for working on embedded systems where both hardware and software play crucial roles.
Relevance to IoT: Computer engineering programs often cover topics such as microcontrollers, real-time operating systems, and hardware-software interfacing, which are directly applicable to IoT development.

Electronics and Communication Engineering:
Focus: This field is more inclined towards the design and development of electronic systems, communication systems, and signal processing. While it may not delve as deeply into software aspects as computer engineering, it provides a strong foundation in hardware design and communication technologies.
Relevance to IoT: Electronics and Communication Engineering can be beneficial for IoT, especially in the context of sensor design, communication protocols, and networking aspects of IoT systems.

Computer and Communication Engineering:
Focus: This interdisciplinary program combines aspects of computer engineering and communication engineering, offering a balanced approach to both fields.
Relevance to IoT: With a focus on both computer and communication aspects, this program could provide a well-rounded education for IoT, covering both the hardware and communication aspects of embedded systems.

Choosing the Right Program:
Consider the curriculum of each program at the specific university you are interested in. Look for courses that cover topics such as microcontrollers, embedded systems, communication protocols, and IoT applications. Additionally, consider any opportunities for hands-on projects or internships related to embedded systems and IoT.

If possible, reach out to current students or faculty members in each program to gain insights into the specific strengths and opportunities each program offers for pursuing a career in embedded systems and IoT.

Ultimately, both Computer Engineering and Electronics and Communication Engineering can lead to successful careers in IoT, so choose the program that aligns more closely with your interests and career aspirations.
Les Cargill
2017-08-02 01:46:55 UTC
Permalink
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's
hard to pick up on your own, i.e. math and the more mathematical
parts of engineering (especially signals & systems and
electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks, (which
has an impressively high signal-to-noise ratio), or watch the news
(which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since
I was a teenager. I picked up good habits from reading books and
other people's code.
From reading fora and such, I don't think people like to learn how to
program that much any more.
Post by Phil Hobbs
Security is another issue. I don't do IoT things myself (and try not
to buy them either), but since that's the OP's interest, I agree that
one should add security/cryptography to the list of subjects to learn
about at school.
WRT to programming, generally "safety" or "security" means "don't
expose UB in C programs". This becomes political, fast.

I dunno whether crypto knowledge is of any use or not, beyond the "might
need it" level.
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
and with a good math background you can teach yourself anything
you need to know about.
Agreed.
Post by Phil Hobbs
Just learning MCUs and FPGAs is a recipe for becoming obsolete.
There's always a decision to be made as to whether to be a
generalist or a specialist. Both options are valid, and they have
complementary advantages and disadvantages.
Being a specialist is one thing, but getting wedded to one set of
tools and techniques is a problem.
Cheers
Phil Hobbs
--
Les Cargill
David Brown
2017-08-02 07:30:10 UTC
Permalink
Post by Les Cargill
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
Another thing is to concentrate the course work on stuff that's
hard to pick up on your own, i.e. math and the more mathematical
parts of engineering (especially signals & systems and
electrodynamics).
Agreed.
Post by Phil Hobbs
Programming you can learn out of books without much difficulty,
The evidence is that /isn't/ the case :( Read comp.risks, (which
has an impressively high signal-to-noise ratio), or watch the news
(which doesn't).
Dunno. Nobody taught me how to program, and I've been doing it since
I was a teenager. I picked up good habits from reading books and
other people's code.
You can certainly learn things that way - if the books and the code are
good enough. You also need an expert or two that you can talk to (or at
least, a good newsgroup!), and be able to research the details.
Otherwise you learn from one example that a loop of 10 in C is written
"for (i = 0; i < 10; i++)", and then a loop of 100 is "for (i = 0; i <
100; i++)". Then you see a web page with "for (i = 0; i < 100000; i++)"
but when you try that on your AVR, suddenly it does not work.
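
On a typical AVR, int is only 16 bits, so the loop counter overflows
(undefined behaviour) long before it ever reaches 100000. The fix is a
wider counter - a trivial sketch, assuming nothing beyond the 16-bit int:

    #include <stdint.h>

    void wait_a_while(void)
    {
        for (uint32_t i = 0; i < 100000UL; i++) {  /* 32-bit counter */
            /* do_something(); */
        }
    }
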

Most of the details of particular languages can be picked up from books
(or websites), but I think that some training is needed to be a good
programmer - you need to understand how to /think/ programming.
Mathematics and electronics engineering help too.
Post by Les Cargill
From reading fora and such, I don't think people like to learn how to
program that much any more.
Well, it is not uncommon in forums and newsgroups to get the people who
have ended up with a project that is well beyond their abilities, and/or
time frame, and they want to get things done without "wasting" time
learning. And of course there are the people who believe they know it
all already, and have great difficulty learning.
Post by Les Cargill
Post by Phil Hobbs
Security is another issue. I don't do IoT things myself (and try not
to buy them either), but since that's the OP's interest, I agree that
one should add security/cryptography to the list of subjects to learn
about at school.
WRT to programming, generally "safety" or "security" means "don't
expose UB in C programs". This becomes political, fast.
What do you mean by that? Undefined behaviour is just bugs in the code.
The concept of undefined behaviour in C is a good thing, and helps you
get more efficient code - but if your code relies on the results of
undefined behaviour it is wrong. In some cases, it might happen to work
- but it is still wrong.

To be safe and secure, a program should not have bugs (at least not ones
that affect safety or security!). That applies to all bugs - be it UB,
overflows, misunderstandings about the specifications, mistakes in the
specifications, incorrect algorithms, incorrect functions - whatever.
UB is not special in that way.

And what do you mean by "this becomes political" ?
Post by Les Cargill
I dunno whether crypto knowledge is of any use or not, beyond the "might
need it" level.
A little crypto knowledge is good, as is lots - but a medium amount of
crypto knowledge can be a dangerous thing. Most programmers know that
they don't understand it, and will use third-party software or hardware
devices for cryptography. They need to know a little about it, to know
when and how to use it - but they don't need to know how it works.

At the other end, the industry clearly needs a certain number of people
who /do/ know how it all works, to implement it.

The big danger is the muppets in the middle who think "that 3DES routine
is so /slow/. I can write a better encryption function that is more
efficient".
Post by Les Cargill
Post by Phil Hobbs
Post by Tom Gardner
Post by Phil Hobbs
and with a good math background you can teach yourself anything
you need to know about.
Agreed.
Post by Phil Hobbs
Just learning MCUs and FPGAs is a recipe for becoming obsolete.
There's always a decision to be made as to whether to be a
generalist or a specialist. Both options are valid, and they have
complementary advantages and disadvantages.
Being a specialist is one thing, but getting wedded to one set of
tools and techniques is a problem.
Cheers
Phil Hobbs
Les Cargill
2017-08-05 20:11:30 UTC
Permalink
<snip>
Post by David Brown
Post by Les Cargill
From reading fora and such, I don't think people like to learn how
to program that much any more.
Well, it is not uncommon in forums and newsgroups to get the people
who have ended up with a project that is well beyond their abilities,
and/or time frame, and they want to get things done without "wasting"
time learning. And of course there are the people who believe they
know it all already, and have great difficulty learning.
I see a lot of people who really lean on higher-order constructs.
IMO, C++ vectors and arrays look remarkably similar, primarily
differing in lifespan. But to some people, they're wildly
different.

NULL pointers and NUL-terminated strings seem to be a problem for
many people, and perhaps just pointers of any sort.
Post by David Brown
Post by Les Cargill
Post by Phil Hobbs
Security is another issue. I don't do IoT things myself (and try
not to buy them either), but since that's the OP's interest, I
agree that one should add security/cryptography to the list of
subjects to learn about at school.
WRT to programming, generally "safety" or "security" means "don't
expose UB in C programs". This becomes political, fast.
What do you mean by that? Undefined behaviour is just bugs in the
code. The concept of undefined behaviour in C is a good thing, and
helps you get more efficient code - but if your code relies on the
results of undefined behaviour it is wrong. In some cases, it might
happen to work - but it is still wrong.
That's how I see it as well; others seem to see the very existence of UB
as one click short of criminal.

Then again, perhaps what I am seeing is propaganda trying to create buzz
for the Rust language.
Post by David Brown
To be safe and secure, a program should not have bugs (at least not
ones that affect safety or security!). That applies to all bugs - be
it UB, overflows, misunderstandings about the specifications,
mistakes in the specifications, incorrect algorithms, incorrect
functions - whatever. UB is not special in that way.
And what do you mean by "this becomes political" ?
By that I mean the tone of communication on the subject
becomes shrill and, in some cases, somewhat hysterical. If this
is mainly propaganda, then that would also explain it.

Let's just say that my confidence that anyone can learn C has
been shaken this year.
Post by David Brown
Post by Les Cargill
I dunno whether crypto knowledge is of any use or not, beyond the
"might need it" level.
A little crypto knowledge is good, as is lots - but a medium amount
of crypto knowledge can be a dangerous thing. Most programmers know
that they don't understand it, and will use third-party software or
hardware devices for cryptography. They need to know a little about
it, to know when and how to use it - but they don't need to know how
it works.
Right. It's like anything complex - we have specialists for
that.
Post by David Brown
At the other end, the industry clearly needs a certain number of
people who /do/ know how it all works, to implement it.
The big danger is the muppets in the middle who think "that 3DES
routine is so /slow/. I can write a better encryption function that
is more efficient".
Oh good grief. :)
<snip>
--
Les Cargill
Paul Rubin
2017-08-08 07:22:23 UTC
Permalink
all bugs - be it UB, overflows, misunderstandings about the
specifications, mistakes in the specifications, incorrect algorithms,
incorrect functions - whatever. UB is not special in that way.
Yes UB is special. All those non-UB bugs you mention will have a
defined behaviour that just isn't the behaviour that you wanted. UB, as
the name implies, has no defined behaviour at all: anything can happen,
including the proverbial nasal demons.
And what do you mean by "this becomes political" ?
I can't speak for Les, but guaranteeing C programs to be free of UB is
so difficult that one can debate whether writing complex critical
programs in C is morally irresponsible. That type of debate tends to
take on a political flavor like PC vs Mac, Emacs vs Vi, and other
similar burning issues.
Tom Gardner
2017-08-08 07:36:25 UTC
Permalink
Post by Paul Rubin
I can't speak for Les, but guaranteeing C programs to be free of UB is
so difficult that one can debate whether writing complex critical
programs in C is morally irresponsible. That type of debate tends to
take on a political flavor like PC vs Mac, Emacs vs Vi, and other
similar burning issues.
Yes, in all respects.

And more people /think/ they can avoid UB than
can actually achieve that nirvana. That's
dangerous Dunning-Kruger territory.
David Brown
2017-08-08 10:37:41 UTC
Permalink
Post by Paul Rubin
all bugs - be it UB, overflows, misunderstandings about the
specifications, mistakes in the specifications, incorrect algorithms,
incorrect functions - whatever. UB is not special in that way.
Yes UB is special. All those non-UB bugs you mention will have a
defined behaviour that just isn't the behaviour that you wanted. UB, as
the name implies, has no defined behaviour at all: anything can happen,
including the proverbial nasal demons.
Bugs are problems, no matter whether they have defined behaviour or
undefined behaviour. But it is sometimes possible to limit the damage
caused by a bug, and it can certainly be possible to make it easier or
harder to detect.

The real question is, would it help to give a definition to typical C
"undefined behaviour" like signed integer overflows or access outside of
array bounds?

Let's take the first case - signed integer overflows. If you want to
give a defined behaviour, you pick one of several mechanisms. You could
use two's complement wraparound. You could use saturated arithmetic.
You could use "trap representations" - like NaN in floating point. You
could have an exception mechanism like C++. You could have an error
handler mechanism. You could have a software interrupt or trap.

Giving a defined "ordinary" behaviour like wrapping would be simple and
appear efficient. However, it would mean that the compiler would be
unable to spot problems at compile time (the best time to spot bugs!),
and it would stop the compiler from a number of optimisations that let
the programmer write simple, clear code while relying on the compiler to
generate good results.

Any kind of trap or error handler would necessitate a good deal of extra
run-time costs, and negate even more optimisations. The compiler could
not even simplify "x + y - y" to "x" because "x + y" might overflow.

It is usually a simple matter for a programmer to avoid signed integer
overflow. Common methods include switching to unsigned integers, or
simply increasing the size of the integer types.
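
A small sketch of those two fixes (and of the problem they avoid):

    #include <stdint.h>

    int32_t average_bad(int32_t a, int32_t b)
    {
        return (a + b) / 2;              /* a + b may overflow: undefined */
    }

    int32_t average_wide(int32_t a, int32_t b)
    {
        return (int32_t)(((int64_t)a + b) / 2);   /* widen first */
    }

    uint32_t wrapping_sum(uint32_t a, uint32_t b)
    {
        return a + b;                    /* unsigned wraps mod 2^32: defined */
    }
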

Debugging tools can help spot problems, such as the "sanitizers" in gcc
and clang, but these are of limited use in embedded systems.


Array bound checking would also involve a good deal of run-time
overhead, as well as re-writing of C code (since you would need to track
bounds as well as pointers). And what do you do when you have found an
error?
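
To make the "track bounds as well as pointers" point concrete - purely a
sketch, and the abort() below is exactly that open question:

    #include <stddef.h>
    #include <stdlib.h>

    struct span {                /* a "fat pointer": bounds travel with it */
        int    *data;
        size_t  len;
    };

    int span_get(struct span s, size_t i)
    {
        if (i >= s.len)
            abort();             /* ...or trap, log, return an error code? */
        return s.data[i];
    }
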


C is like a chainsaw. It is very powerful, and lets you do a lot of
work quickly - but it is also dangerous if you don't know what you are
doing. Remember, however, that no matter how safe and idiot-proof your
tree-cutting equipment is, you are still at risk from the falling tree.
Post by Paul Rubin
And what do you mean by "this becomes political" ?
I can't speak for Les, but guaranteeing C programs to be free of UB is
so difficult that one can debate whether writing complex critical
programs in C is morally irresponsible. That type of debate tends to
take on a political flavor like PC vs Mac, Emacs vs Vi, and other
similar burning issues.
I would certainly agree that a good deal of code that is written in C,
should have been written in other languages. It is not the right tool
for every job. But it /is/ the right tool for many jobs - and UB is
part of what makes it the right tool. However, you need to understand
what UB is, how to avoid it, and how the concept can be an advantage.
tim...
2017-07-27 16:46:56 UTC
Permalink
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
I don't think that you can see IoT as a branch of the industry that requires
anything special at entry level

A junior engineering role on an embedded project is probably not going to be
expected to deal with any of the security issues (hell there are a lot of
companies investigating adding IoT functionality to their products that
don't have principal engineers working on that), so it just looks like any
other embedded project at that level of experience

you just have to target your job hunt to the relevant companies at
graduation time

so IME any EE or engineering-biased CS degree will do

tim
Grant Edwards
2017-07-27 16:54:01 UTC
Permalink
Post by tim...
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
I don't think that you can see IoT as a branch of the industry that requires
anything special at entry level
A junior engineering role on an embedded project is probably not going to be
expected to deal with any of the security issues
AFAICT, nobody at any level in IoT is expected to deal with any of the
security issues. Or deal with making products do something useful,
for that matter.
--
Grant Edwards               grant.b.edwards at gmail.com
Yow! Of course, you UNDERSTAND about the PLAIDS in the SPIN CYCLE --
tim...
2017-07-27 18:57:15 UTC
Permalink
Post by Grant Edwards
Post by tim...
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
I don't think that you can see IoT as a branch of the industry that requires
anything special at entry level
A junior engineering role on an embedded project is probably not going to be
expected to deal with any of the security issues
AFAICT, nobody at any level in IoT is expected to deal with any of the
security issues.
I thought that I said that
Post by Grant Edwards
Or deal with making products do something useful,
for that matter.
harsh

the early IoT proposals based upon mesh systems seem to have created some
useful products, street light management for example

tim
Frank Miles
2017-07-27 16:58:28 UTC
Permalink
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
---------------------------------------
Posted through http://www.EmbeddedRelated.com
I bet that these programs have much overlap. You should look at the
details of what courses are standard and what are electives, and see
what appeals to you.

This may be antithetical to some, but I think time at a University
should mostly be on the "theoretical" side. Primarily it's because
picking up that stuff on your own, later, is relatively hard to do.
It's also more likely to have lasting value, at least in comparison
to learning the language or platform du jour.

By all means plan on doing more "practical" work on your own, during
your educational time. These days there are many avenues for that.

Worst case - you make a choice that later seems wrong - you should
be able to transfer at fairly low time/expense cost.

Best wishes!
Theo Markettos
2017-07-27 23:43:57 UTC
Permalink
Post by Frank Miles
I bet that these programs have much overlap. You should look at the
details of what courses are standard and what are electives, and see
what appeals to you.
It's probably worth finding out what the routes are: if you decide to do one
programme, are you stuck with that or can you take courses that lead in a
different direction? Many people find their strengths are in different
places than they expected.
Post by Frank Miles
This may be antithetical to some, but I think time at a University
should mostly be on the "theoretical" side. Primarily it's because
picking up that stuff on your own, later, is relatively hard to do.
It's also more likely to have lasting value, at least in comparison
to learning the language or platform de jour.
By all means plan on doing more "practical" work on your own, during
your educational time. These days there are many avenues for that.
I'd agree with that - something like 'IoT' is likely to be very different in
4-5 years time when you finish, in terms of the tools and popular platforms.
So it's better to have a grounding and then keep up with the platform du
jour as the icing on top.

The other aspect is good engineering practices: writing clean code, good
documentation, using tools like version control appropriately, etc. I'd
suggest that's a skill that isn't well taught in big groups (one instructor,
500 students). It's better to do it either on the job (eg internships) or
other environments where you might receive mentoring, eg open source
projects. Similarly for practical skills like soldering, assembly, etc -
to some degree you can pick those up from YouTube, or else you need someone
sitting next to you telling you what you did wrong.
Post by Frank Miles
Worst case - you make a choice that later seems wrong - you should
be able to transfer at fairly low time/expense cost.
Also don't be afraid to look over the wall at other disciplines -
occasionally having a CS/biology or EE/psychology or whatever crossover can
come in very handy. Or closer to home EE/CS, EE/mechE, EE/power, EE/physics
or similar combinations.

Theo
j***@ieee.org
2017-07-30 22:26:00 UTC
Permalink
Post by Frank Miles
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
---------------------------------------
Posted through http://www.EmbeddedRelated.com
I bet that these programs have much overlap. You should look at the
details of what courses are standard and what are electives, and see
what appeals to you.
This may be antithetical to some, but I think time at a University
should mostly be on the "theoretical" side. Primarily it's because
picking up that stuff on your own, later, is relatively hard to do.
It's also more likely to have lasting value, at least in comparison
to learning the language or platform du jour.
By all means plan on doing more "practical" work on your own, during
your educational time. These days there are many avenues for that.
Worst case - you make a choice that later seems wrong - you should
be able to transfer at fairly low time/expense cost.
Best wishes!
Once you get an EE job, the second part of your education starts:
In my case learning all the chips and parts for circuit design (which is steered in the direction of what you anticipate you will need for your employer's work).
The manufacturers provide application notes that are very good at reinforcing and extending your college knowledge base.
Chris
2017-07-27 23:45:31 UTC
Permalink
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
I would choose the electronics degree first, as more likely to keep you
in work rather than computer science, which for embedded work is a
subset that depends on electronics. It will also stretch you more in
math terms than comp sci alone.

Buy the books on comp sci as well, particularly OS theory, algorithms
and data structures. Learn that in your spare time, and find the books
second-hand on AbeBooks or Amazon.

Good luck, a worthwhile career and plenty of scope for innovative design
and creativity...

Chris
Hans-Bernhard Bröker
2017-08-03 17:50:53 UTC
Permalink
I am applying for university right now and I am wondering which
"Computer engineering" vs "electronics and communication engineering" also
a specific university offers "computer and communication engineering" I
know that having any of those I can get into IoT but which would be better
for the field?
Odds are this "field" will either have vanished completely (and maybe
deservedly), or have changed beyond recognition in the time from now to
when you finish your degree. Betting several years of your life (and
depending on your country's style of doing things, up to tens of
thousands of dollars on top) on that kind of hunch is rarely advisable.

This is an easy mistake to make, and there are gazillions of freshmen
who make it every year. It causes the same "pork cycles" of bubbles and
crashes in the education and job markets as are observed in the general
economy, and for much the same reason, too.

One of the worst examples in recent history was in 2001, when the very
public "dot-com" bubble drove millions of youngsters worldwide to the
belief that they absolutely needed to study computer science _now_, to
get on the ball early. So for a year or two there were upward of 4
times as many freshmen in CS courses as usual, the vast majority of
whom were clearly in entirely the wrong place. And it showed.
Failures and drop-out rates shot through the roof, and those relatively
few "extra" graduates who actually made it onto the job market did so
years _after_ the bubble had burst, explosively. Overall, the whole
episode was just a colossal waste of hopes, life-time, money and other
things.

So my advice is: do your best to forget about any and all current trends
and hypes in the economy when you make decisions about your university
studies. At best, they're a pointless distraction; at worst they'll
mislead you into a field of work you hate for the rest of your life,
where you'll be pitted against naturals who like doing it, and are
generally better at it, too.

The silly number of supposedly different degrees offered in many
countries these days doesn't help, either. Nowadays, wherever there's a
particular combination of testable skills that some university believes
will be useful to more than 40 people in the world, total, they'll feel
obliged to invent a name for that precise combination of skills and set
up a course programme to crank out bachelors of it. Of course, the
Universities' predictions about future needs of the job market aren't
really that much more reliable than anyone's. And so the pork cycle
rolls on.

And they don't even start to think about how anybody is supposed to make
an informed decision between such ultra-specialized programmes. I'm
convinced it's impossible.
Les Cargill
2017-08-05 20:20:40 UTC
Permalink
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is supposed to
make an informed decision between such ultra-specialized programmes.
I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.

I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.

And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.

Just be aware that people are uniformly terrible at hiring in tech,
so networking is key.
--
Les Cargill
u***@downunder.com
2017-08-06 06:17:19 UTC
Permalink
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is supposed to
make an informed decision between such ultra-specialized programmes.
I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.
I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.
And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades. You need to do some programming but as equally important
interface to he external world (sensors, relay controls and
communication to other devices).

These days, the programmable devices are just smaller, _much_ cheaper
and have much better performance than a PLC one or two decades ago.

Take a look at universities having industrial automation courses and
check what topics are included relevant to PLCs. Select these subjects
at your local university. You might not need process control theory
for simple IoT :-)

Analog electronics is important e.g. for interfacing exotic sensors or
controlling equally odd devices as well as protecting I/O against
overvoltage and ground potential issues. Understanding about line
voltage issues and line wiring can be a question of life and death.
Post by Les Cargill
Just be aware that people are uniformly terrible at hiring in tech,
so networking is key.
These days many jobs are outsourced to cheaper countries, so you might
concentrate on skills that are harder to outsource.
tim...
2017-08-06 09:12:36 UTC
Permalink
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is supposed to
make an informed decision between such ultra-specialized programmes.
I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.
I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.
And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
don't think so

the IoT hype is all about marketing benefits - selling consumers extra
features (that they never knew they ever wanted and probably don't need)

using PLC's is an engineering benefit (or not)

tim
u***@downunder.com
2017-08-06 10:19:53 UTC
Permalink
Post by tim...
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is supposed to
make an informed decision between such ultra-specialized programmes.
I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.
I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.
And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
don't think so
the IoT hype is all about marketing benefits - selling consumers extra
features (that they never knew they ever wanted and probably don't need)
Yes, this seems to be the main motivation.
Post by tim...
using PLC's is an engineering benefit (or not)
The greatly reduced hardware cost (both processing power and
Ethernet/WLAN communication) has made it possible to just handle a
single signal (or a small set of related I/O signals) in dedicated
hardware for each signal. Thus the controlling "IoT" device could read
a measurement and control an actuator in a closed loop and receive a
setpoint from the network.

This means that the controlling device can be moved much closer to the
actuator, simplifying interfacing (not too much worrying about
interference). Taking this even further, this allows integrating the
controller into the actual device itself such as a hydraulic valve
(mechatronics). Just provide power and an Ethernet connection and off
you go. Of course, the environment requirements for such integrated
products can be quite harsh.

Anyway, moving most of the intelligence down to the actual device
reduces the need for PLC systems, so PC-based control room
programs can directly control those intelligent mechatronics units.

Anyway, if the "IoT" device is moved inside the actual actuator etc.
device, similar skills are needed to interface to the input sensor
signals as well as controlling actuators as in the case of external
IoT controllers. With everything integrated into the same case, some
knowledge of thermal design will also help.

While some courses in computer science are useful, IMHO spending too
much time on CS might not be that productive.
Les Cargill
2017-08-06 14:59:42 UTC
Permalink
Post by tim...
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is
supposed to make an informed decision between such
ultra-specialized programmes. I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a
second-class setup in many universities.
I get the feeling that *analog* engineers still have a stable
job base because it's much harder to fake that. It's somewhat
harder.
And I'd warn the OP against specifically targeting IoT. It's a
big bubble. People win in bubbles but it's not likely you will be
among them.
I have often wondered what this IoT hype is all about. It seems to
be very similar to the PLC (Programmable Logic Controller) used
for decades.
don't think so
the IoT hype is all about marketing benefits - selling consumers
extra features (that they never knew they ever wanted and probably
don't need)
The IoT hype that relates to people trying to get funding for things
like Internet enabled juicers might be more frothy than the potential
for replacing PLCs with hardware and software that comes from the
IoT/Maker space.
Post by tim...
using PLC's is an engineering benefit (or not)
It's not difficult to get beyond the capability of many PLCs. The
highly capable ones (like NI) tend to be "hangar queens" - they're not
mechanically rugged.
Post by tim...
tim
--
Les Cargill
tim...
2017-08-06 17:06:30 UTC
Permalink
Post by Les Cargill
Post by tim...
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is
supposed to make an informed decision between such
ultra-specialized programmes. I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a
second-class setup in many universities.
I get the feeling that *analog* engineers still have a stable
job base because it's much harder to fake that. It's somewhat
harder.
And I'd warn the OP against specifically targeting IoT. It's a
big bubble. People win in bubbles but it's not likely you will be
among them.
I have often wondered what this IoT hype is all about. It seems to
be very similar to the PLC (Programmable Logic Controller) used
for decades.
don't think so
the IoT hype is all about marketing benefits - selling consumers
extra features (that they never knew they ever wanted and probably
don't need)
The IoT hype that relates to people trying to get funding for things
like Internet enabled juicers might be more frothy
I have just received a questionnaire from the manufacturers of my PVR asking
about what upgraded features I would like it to include.

Whilst they didn't ask it openly, reading between the lines they were
asking:

"would you like to control your home heating (and several other things) via
your Smart TV (box)"

To which I answered, of course I bloody well don't

Even if I did see a benefit in having an internet-connected heating
controller, why would I want to control it from my sofa using anything other
than the remote control that comes with it, in the box?

tim
rickman
2017-08-07 01:31:57 UTC
Permalink
Post by tim...
I have just received a questionnaire from the manufacturers of my PVR asking
about what upgraded features I would like it to include.
"would you like to control your home heating (and several other things) via
your Smart TV (box)"
To which I answered, of course I bloody well don't
Even if I did see a benefit in having an internet connected heating
controller, why would I want to control it from my sofa using anything other
than the remote control that comes with it, in the box?
None of this makes sense to me because I have no idea what a PVR is.
--
Rick C
tim...
2017-08-07 08:32:44 UTC
Permalink
Post by rickman
Post by tim...
I have just received a questionnaire from the manufacturers of my PVR asking
about what upgraded features I would like it to include.
"would you like to control your home heating (and several other things) via
your Smart TV (box)"
To which I answered, of course I bloody well don't
Even if I did see a benefit in having an internet connected heating
controller, why would I want to control it from my sofa using anything other
than the remote control that comes with it, in the box?
None of this makes sense to me because I have no idea what a PVR is.
A Personal Video Recorded (a disk based video recorder)
Post by rickman
--
Rick C
Les Cargill
2017-08-06 14:53:55 UTC
Permalink
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
<snip>
And so the pork cycle rolls on.
That's a great way to put it.
And they don't even start to think about how anybody is supposed to
make an informed decision between such ultra-specialized programmes.
I'm convinced it's impossible.
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.
I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.
And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
Similar. But PLCs are pointed more at ladder logic for use in
industrial settings. You generally cannot, for example, write a socket
server that just does stuff on a PLC; you have to stay inside a dev
framework that cushions it for you.

There is a great deal of vendor lock-in and the tool suites are rather
creaky. And it's all very costly.
Post by u***@downunder.com
You need to do some programming, but it is equally important
to interface to the external world (sensors, relay controls and
communication to other devices).
Yep.
Post by u***@downunder.com
These days, the programmable devices are just smaller, _much_ cheaper
and have much better performance than a PLC one or two decades ago.
Very much so. While doing paper-engineering - as in PE work - for power
distro has some learning curve, the basics of power distro aren't rocket
surgery.
Post by u***@downunder.com
Take a look at universities having industrial automation courses and
check what topics are included relevant to PLCs. Select these subjects
at your local university. You might not need process control theory
for simple IoT :-)
You might end up building a flaky hunk of garbage if you don't...
Post by u***@downunder.com
Analog electronics is important e.g. for interfacing exotic sensors or
controlling equally odd devices as well as protecting I/O against
overvoltage and ground potential issues. Understanding about line
voltage issues and line wiring can be a question of life and death.
Absolutely.
Post by u***@downunder.com
Post by Les Cargill
Just be aware that people are uniformly terrible at hiring in tech,
so networking is key.
These days many jobs are outsourced to cheaper countries, so you might
concentrate on skills that are harder to outsource.
--
Les Cargill
u***@downunder.com
2017-08-07 07:59:19 UTC
Permalink
On Sun, 6 Aug 2017 09:53:55 -0500, Les Cargill
Post by Les Cargill
Post by u***@downunder.com
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
Similar. But PLCs are more pointed more at ladder logic for use in
industrial settings. You generally cannot, for example, write a socket
server that just does stuff on a PLC; you have to stay inside a dev
framework that cushions it for you.
In IEC-1131 (now IEC 61131-3) you can enter the program in the format
you are mostly familiar with, such as ladder logic or structured text
(ST), which is similar to Modula (and somewhat resembles Pascal) with
normal control structures.

IEC-1131 has been available for two decades
Les Cargill
2017-08-08 01:09:23 UTC
Permalink
Post by u***@downunder.com
On Sun, 6 Aug 2017 09:53:55 -0500, Les Cargill
Post by Les Cargill
Post by u***@downunder.com
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
Similar. But PLCs are more pointed more at ladder logic for use in
industrial settings. You generally cannot, for example, write a socket
server that just does stuff on a PLC; you have to stay inside a dev
framework that cushions it for you.
In IEC-1131 (now IEC 61131-3) you can enter the program in the format
you are mostly familiar with, such as ladder logic or structured text
(ST), which is similar to Modula (and somewhat resembles Pascal) with
normal control structures.
It may resemble Pascal, but it's still limited in what it can do. It's
good enough for ... 90% of things that will need to be done, but I live
outside that 90% myself.
Post by u***@downunder.com
IEC-1131 has been available for two decades
--
Les Cargill
u***@downunder.com
2017-08-08 18:31:18 UTC
Permalink
On Mon, 7 Aug 2017 20:09:23 -0500, Les Cargill
Post by Les Cargill
Post by u***@downunder.com
On Sun, 6 Aug 2017 09:53:55 -0500, Les Cargill
Post by Les Cargill
Post by u***@downunder.com
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades.
Similar. But PLCs are more pointed more at ladder logic for use in
industrial settings. You generally cannot, for example, write a socket
server that just does stuff on a PLC; you have to stay inside a dev
framework that cushions it for you.
In IEC-1131 (now IEC 61131-3) you can enter the program in the format
you are mostly familiar with, such as ladder logic or structured text
(ST), which is similar to Modula (and somewhat resembles Pascal) with
normal control structures.
It may resemble Pascal, but it's still limited in what it can do. It's
good enough for ... 90% of things that will need to be done, but I live
outside that 90% myself.
At least in the CoDeSys implementation of IEC 1131 it is easy to write
some low level functions e.g. in C, such as setting up hardware
registers, doing ISRs etc. Just publish suitable "hooks" that can be
used by the ST code, which then can be accessed by function blocks or
ladder logic.

In large projects, different people can do various abstraction layers.

When these hooks (written in C etc.) are well defined, people familiar
with ST or the other IEC 1131 forms can build their own applications. I wrote
some hooks in C at the turn of the century and have not needed to
touch them since; all the new operations could be implemented by other
people more familiar with IEC 1131.
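
Such a hook is ordinary C; only the way it is published to the runtime is
vendor-specific (and omitted below - the register address and names here
are made up purely for illustration):

    #include <stdint.h>

    /* Hypothetical memory-mapped timer register - address is made up. */
    #define TIMER_COUNT_REG (*(volatile uint32_t *)0x40001000u)

    /* Plain C hook; once published to the IEC 61131-3 runtime it can be
       called from ST or wrapped in a function block.                   */
    uint32_t ReadTimerCount(void)
    {
        return TIMER_COUNT_REG;
    }
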
Post by Les Cargill
Post by u***@downunder.com
IEC-1131 has been available for two decades
Stef
2017-08-21 09:28:50 UTC
Permalink
Post by u***@downunder.com
On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill
Post by Les Cargill
IMO, a reputable EE programme is still probably the best way. CS
programs still vary too much; CS may or may not be a second-class
setup in many universities.
I get the feeling that *analog* engineers still have a stable job
base because it's much harder to fake that. It's somewhat harder.
Yes a good understanding of analog (and digital) electronics is IMO
still the best starting point if you plan to build and program
"lower level" devices, like the "IoT" devices.
Post by u***@downunder.com
Post by Les Cargill
And I'd warn the OP against specifically targeting IoT. It's a big
bubble. People win in bubbles but it's not likely you will be among
them.
I have often wondered what this IoT hype is all about. It seems to be
very similar to the PLC (Programmable Logic Controller) used for
decades. You need to do some programming, but it is equally important
to interface to the external world (sensors, relay controls and
communication to other devices).
"IoT" mostly seems a new buzz word for things that have been done for
decades, but then with improved (fancier) user interface.

Saw an article on new IoT rat traps lately: "Remotely monitor the trap,
warns if activated or battery low etc. Uses SMS to communicate with
server". Now, that just sounds like what we did 20 years ago. But then
we called it M2M communication and it did not have such a pretty web
interface and we did not have to hand over all our data to Google or
some other party. And there was no 'cloud', just a server.

And ofcourse there are sensible IoT devices and services, but too many
things are just labeled "IoT" for the label value alone.

And what about this "new" thing: "Edge Computing"
Something "new": Process the data locally (on the embedded device) before
you send it to the server.

Again something that has been done for decades (someone in this thread
called it the "pork cycle"?) because we needed to. The slow serial
connections just couldn't handle the raw, unprocessed data and servers
could not handle data processing for many devices simultaneously.

Just sending everything over to the server was only made possible by
fast internet connections. But now they find out that with so many
devices everything is drowned in a data swamp. So, bright new idea:
process locally and invent a new buzzword.

Hmm, I think I'm starting to sound old. ;-(
--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

Death is nature's way of saying `Howdy'.