Discussion:
AM623 experiences
Don Y
2024-11-23 08:15:51 UTC
I'm looking to move my design onto said platform.

Any first-hand experiences to share?

Bugs in silicon, toolchain, support, etc.?
David Brown
2024-11-23 12:31:28 UTC
Post by Don Y
I'm looking to move my design onto said platform.
Any first-hand experiences to share?
Bugs in silicon, toolchain, support, etc.?
I've no experience with that device at all, but from experience with
other toolchains provided by TI over the years, watch out for zero
initialisation of variables in the bss. TI have had this crazy idea
that zero initialisation of program lifetime data is a waste of time at
startup, so many of their toolchains don't do so unless you specifically
add extra flags or extra code for it. This applied even to gcc-based
toolchains. (You can achieve this non-standard behaviour by special
linker scripts or a broken startup library.)
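Concretely, standards-conforming startup code clears .bss before main() runs. A minimal sketch of that step, assuming GNU-style linker symbols (the exact names depend on the linker script; __bss_start__/__bss_end__ is a common convention, and TI's scripts may differ):

```c
/* Linker-provided symbols marking the .bss region -- names are a
   common GNU convention, not guaranteed; check your linker script. */
extern unsigned char __bss_start__[];
extern unsigned char __bss_end__[];

/* Clear a byte range.  Startup code would call
   zero_region(__bss_start__, __bss_end__) before main(). */
static void zero_region(unsigned char *start, unsigned char *end)
{
    while (start < end)
        *start++ = 0u;
}
```

If the vendor startup skips this call (or the flag enabling it), everything in .bss holds junk instead of zeros, which is exactly the non-standard behaviour to check for.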

Maybe they have stopped this lunacy, but I'd double-check this when
using any toolchain supplied by TI for the M4 core. (The Linux cores -
assuming you are using Linux on the A53's - will of course be standard.)



Is there any particular reason for picking this device? AFAIK the NXP
i.MX families are a lot more common for embedded Linux SoC's in
industrial equipment.
Grant Edwards
2024-11-23 19:57:50 UTC
Post by David Brown
Post by Don Y
I'm looking to move my design onto said platform.
Any first-hand experiences to share?
Bugs in silicon, toolchain, support, etc.?
I've no experience with that device at all, but from experience with
other toolchains provided by TI over the years, watch out for zero
initialisation of variables in the bss.
For the Cortex-M4, I'd avoid using TI's toolchain if at all possible.
Download a copy of GCC from ARM.

The general rule I've found to be true for the past 40 years is that
software/tools supplied by silicon vendors is crap.

--
Grant
Don Y
2024-11-23 23:56:45 UTC
Post by Grant Edwards
Post by David Brown
Post by Don Y
I'm looking to move my design onto said platform.
Any first-hand experiences to share?
Bugs in silicon, toolchain, support, etc.?
I've no experience with that device at all, but from experience with
other toolchains provided by TI over the years, watch out for zero
initialisation of variables in the bss.
[Hmmm... I don't see your post, David. <frown> Something must be
hosed in my server. (actually, I don't see MANY posts, here! Quiet?)]

BSS is a good warning. I've been using other tools with the ARM that
I've "abandoned" ("moved past" might be a kinder way of reference).

I tend to rely on a vendor's tools *if* there is likely to be some
bug that the tools can workaround -- that a third-party vendor
may not be aware of (or address).
Post by Grant Edwards
For the Cortex-M4, I'd avoid using TI's toolchain if at all possible.
Download a copy of GCC from ARM.
This is an A53 (ARMv8) -- at least the "main cores" are. There's
also an M4(F) -- and, as typical of ARM, a couple of other
"specialty processors".
Post by Grant Edwards
The general rule I've found to be true for the past 40 years is that
software/tools supplied by silicon vendors is crap.
That's a direct analog of the issues with "sample applications" for
hardware devices!

What about "support"? Is there anything other than trained monkeys
available? Or, is everything "forum based" (what a great scam!
outsource your support to your CUSTOMERS!!)
David Brown
2024-11-24 10:42:43 UTC
Post by Grant Edwards
Post by David Brown
Post by Don Y
I'm looking to move my design onto said platform.
Any first-hand experiences to share?
Bugs in silicon, toolchain, support, etc.?
I've no experience with that device at all, but from experience with
other toolchains provided by TI over the years, watch out for zero
initialisation of variables in the bss.
[Hmmm... I don't see your post, David.  <frown>  Something must be
hosed in my server.  (actually, I don't see MANY posts, here!  Quiet?)]
I don't think there is anything odd with my posting or the server on my
side (eternal-september). But Grant quoted the most important part of
my post - the rest was more venting than informative!

It /is/ quiet in this newsgroup.
BSS is a good warning.  I've been using other tools with the ARM that
I've "abandoned" ("moved past" might be a kinder way of reference).
I tend to rely on a vendor's tools *if* there is likely to be some
bug that the tools can workaround -- that a third-party vendor
may not be aware of (or address).
ARM is pretty good at making sure they cover all known bugs - so their
gcc toolchain builds tend to have backported patches that are not in the
mainstream gcc source tree until a version or two later. Mistakes can
happen, of course, and you can look at the issue trackers for their
toolchain builds to see some of them. But for the kind of cores in
question here, the microcontroller vendors have little influence over
the cores themselves, and therefore less scope for introducing
vendor-specific bugs.
Post by Grant Edwards
For the Cortex-M4, I'd avoid using TI's toolchain if at all possible.
Download a copy of GCC from ARM.
This is an A53 (ARMv8) -- at least the "main cores" are.  There's
also an M4(F) -- and, as typical of ARM, a couple of other
"specialty processors".
Post by Grant Edwards
The general rule I've found to be true for the past 40 years is that
software/tools supplied by silicon vendors is crap.
That's a direct analog of the issues with "sample applications" for
hardware devices!
What about "support"?  Is there anything other than trained monkeys
available?  Or, is everything "forum based" (what a great scam!
outsource your support to your CUSTOMERS!!)
It is not uncommon for vendor employees to take part in these forums
too. It is not actually a bad idea, because it means that when they
answer a customer's question, the answer is available for anyone else
doing a search.
David Brown
2024-11-24 10:33:37 UTC
Post by Grant Edwards
Post by David Brown
Post by Don Y
I'm looking to move my design onto said platform.
Any first-hand experiences to share?
Bugs in silicon, toolchain, support, etc.?
I've no experience with that device at all, but from experience with
other toolchains provided by TI over the years, watch out for zero
initialisation of variables in the bss.
For the Cortex-M4, I'd avoid using TI's toolchain if at all possible.
Download a copy of GCC from ARM.
The general rule I've found to be true for the past 40 years is that
software/tools supplied by silicon vendors is crap.
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains). The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.

But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development. And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be independent.
Grant Edwards
2024-11-24 17:12:10 UTC
Post by David Brown
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains). The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.
But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development. And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be independent.
What he said, definitely: Avoid vendor-specific IDEs and SDKs like the
plague.

Demo apps and libraries from silicon vendors are usually awful -- even
worse than the toolchains. I'm pretty sure they're written by interns
who think that to be "professional" it has to incorporate layers and
layers of macros and objects and abstraction and polymorphism and
whatnot.

As a result I remember failing to get a vendor's "hello world" demo
to run on a Cortex-M0+ because it was too large for both the flash and
RAM available on the lower end of the family. And it wasn't even
using "printf", just a "low level" serial port driver that should have
been a few hundred bytes of code but was actually something like
10KB.
Don Y
2024-11-24 21:03:08 UTC
[And I'm STILL not seeing your posts. <frown> Something must
be broken in my server. Yet the telephone system seems to be
working properly! Yet another thing to look into...]
Post by Grant Edwards
Post by David Brown
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains). The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.
I think vendors offer tools for much the same reason as they
write app notes or illustrate "typical applications". Their
goal is to get you USING their product, as quickly as
possible. If you had to hunt for development tools, then
you would likely bias your device selection based on the
availability and cost of said tools.

[I worked with a firm that offered a "development system" (compile/ASM/debug
suite plus hardware ICE, in the days before JTAG and on-chip debug
existed) for an obscure, old processor. They recounted a story where
a customer purchased said tool -- for a fair bit of money. And, promptly
RETURNED it with a *nasty* note complaining about the (low) quality
of the fabrication! Some time later, they REordered the system...
when they discovered there were no other offerings in that market!]

Note the inroads Microchip made (esp with hobbyists) by offering
free/low cost tools for their devices. I suspect their devices
were not the ideal choices for those applications -- but, the value
of HAVING the tools without spending kilobucks to buy them weighed
heavily in their decisions!
Post by Grant Edwards
Post by David Brown
But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development. And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be independent.
This is a given, regardless. "Sole source" suppliers are always a risk.
Moving from one ARM to another (vs a totally different architecture)
saves a lot -- but, you are still stuck with the choices the fab made
in what they decided to offer.
Post by Grant Edwards
What he said, definitely: Avoid vendor-specific IDEs and SDKs like the
plague.
Demo apps and libraries from Silicon vendors are usually awful -- even
worse than the toolchains. I'm pretty sure they're written by interns
who think that to be "professional" it has to incorporate layers and
layers of macros and objects and abstraction and polymorphism and
whatnot.
The same is often true of "app notes" for hardware components.

I interviewed a prospective client about a project. Somewhere
along the line he "proudly" presented his proposed "solution"
(then, what do you need ME for?).

I looked at it (schematic) and replied: "This won't work."
I.e., there were signals with no driving sources! He then
admitted to copying it from an app note (said reproduction
later verified to have been accurate; the app note was in error!)

But, you aren't likely going to RUN those apps. And, libraries
can be rebuilt and redesigned. So, you aren't at their "mercy"
for those things.

OTOH, if the vendor has some knowledge of a device defect (that
they aren't eager to publicize -- "trade secrets") but their
tools are aware of it and SILENTLY work-around it, then they
have a leg up on a third-party who is trying to DEDUCE how
the device works from the PUBLISHED knowledge (and personal
observations).

E.g., I would be happier knowing that a compiler would avoid
generating code that could tickle vulnerabilities in a
system (e.g., memory) over one that blindly strives for
performance (or ignorance).

Or, a compiler that knows enough about the SPECIFIC processor
(not just the FAMILY) to know how to more finely optimize its
scheduling of instructions.

[The days of expecting the code to just implement the expressed
algorithm are long past. "What ELSE are you going to do FOR me?"]
Post by Grant Edwards
As a result I remember failing to get a vendor's "hello world" demo
to run on a Cortex-M0+ because it was too large for both the flash and
RAM available on the lower end of the family. And it wasn't even
using "printf", just a "low level" serial port driver that should have
been a few hundred bytes of code but was actually something like
10KB.
But, if they published that code, you could inspect it, determine
what it was TRYING to do and fix it (or, take a lesson from it).

It's amusing when these sorts of things are treated as "proprietary
information" ("secret"). There's nothing revolutionary in a GENERIC
standard library implementation (though there are varying degrees
of performance that can be obtained from those that are SPECIALIZED)
David Brown
2024-11-25 10:41:58 UTC
[And I'm STILL not seeing your posts.  <frown>  Something must
be broken in my server.  Yet the telephone system seems to be
working properly!  Yet another thing to look into...]
Perhaps Grant could re-post for me here?


Maybe you killfiled me?

I think you are using news.eternal-september.org for your server, which
is the same as me.
Post by Grant Edwards
Post by David Brown
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains).  The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.
I think vendors offer tools for much the same reason as they
write app notes or illustrate "typical applications".  Their
goal is to get you USING their product, as quickly as
possible.  If you  had to hunt for development tools, then
you would likely bias your device selection based on the
availability and cost of said tools.
Yes.

Sometimes I think they could put more effort into making sure you want
to /keep/ using their products!
[I worked with a firm that offered a "development system"
(compile/ASM/debug
suite plus hardware ICE, in the days before JTAG and on-chip debug
existed) for an obscure, old processor.  They recounted a story where
a customer purchased said tool -- for a fair bit of money.  And, promptly
RETURNED it with a *nasty* note complaining about the (low) quality
of the fabrication!  Some time later, they REordered the system...
when they discovered there were no other offerings in that market!]
That's it - you don't have to be good to conquer a market, you just have
to be the best available.
Note the inroads Microchip made (esp with hobbyists) by offering
free/low cost tools for their devices.  I suspect their devices
were not the ideal choices for those applications -- but, the value
of HAVING the tools without spending kilobucks to buy them weighed
heavily in their decisions!
The success of Microchip here has always confused me. Their hardware
development tools were not good or particularly cheap when I used them
in the late 1990's. Their software development tools were terrible. I
remember IDE's that wouldn't work on newer PC's, only a fairly basic
assembler with very limited debug support, and C compilers that were
extraordinarily expensive, full of bugs, incompatibilities and crippling
limitations.

They did have a few things going for them - the PIC microcontrollers
were very robust, came in hobby-friendly packages, and never went out of
production. But their tools, beyond the lowest-level basics, were
expensive and very poor quality, even compared to the competition at the
time. They still are - I don't know any other manufacturer that still
charges for toolchains. And their prices are obscene - $1.6k per year
to enable optimisation on their packaging of gcc-based toolchains for
ARM and MIPS. The only effort Microchip have made to justify the price
is all their work in trying to make it look like they wrote the compiler
and it's not just gcc with added licensing locks.
Post by Grant Edwards
Post by David Brown
But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development.  And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be
independent.
This is a given, regardless.  "Sole source" suppliers are always a risk.
Moving from one ARM to another (vs a totally different architecture)
saves a lot -- but, you are still stuck with the choices the fab made
in what they decided to offer.
Post by Grant Edwards
What he said, definitely: Avoid vendor-specific IDEs and SDKs like the
plague.
Demo apps and libraries from Silicon vendors are usually awful -- even
worse than the toolchains. I'm pretty sure they're written by interns
who think that to be "professional" it has to incorporate layers and
layers of macros and objects and abstraction and polymorphism and
whatnot.
The same is often true of "app notes" for hardware components.
I interviewed a prospective client about a project.  Somewhere
along the line he "proudly" presented his proposed "solution"
(then, what do you need ME for?).
I looked at it (schematic) and replied:  "This won't work."
I.e., there were signals with no driving sources!  He then
admitted to copying it from an app note (said reproduction
later verified to have been accurate; the app note was in error!)
But, you aren't likely going to RUN those apps.  And, libraries
can be rebuilt and redesigned.  So, you aren't at their "mercy"
for those things.
OTOH, if the vendor has some knowledge of a device defect (that
they aren't eager to publicize -- "trade secrets") but their
tools are aware of it and SILENTLY work-around it, then they
have a leg up on a third-party who is trying to DEDUCE how
the device works from the PUBLISHED knowledge (and personal
observations).
Call me naïve, but I don't see this happening much. The manufacturers
we use for microcontrollers tend to be quite open about defects and
workarounds, as are ARM (and as I wrote earlier, for the Cortex-M
devices the cores come ready-made from ARM, with a lot less scope for
vendor-specific bugs). A vendor that tried to hide a known defect in an
ARM core would suffer - they would get caught, and getting on the bad
side of ARM is not worth it.

But it is certainly possible that a vendor supplied toolchain build will
have the workaround before it has made it into other toolchains. Hiding
defects or lying about them is bad - but being first to have a fix is fine.
E.g., I would be happier knowing that a compiler would avoid
generating code that could tickle vulnerabilities in a
system (e.g., memory) over one that blindly strives for
performance (or ignorance).
Sure. But you are worrying about nothing, I think. Have you ever seen
a compiler where you know the developers /intentionally/ ignored flaws
or incorrect code generation?
Or, a compiler that knows enough about the SPECIFIC processor
(not just the FAMILY) to know how to more finely optimize its
scheduling of instructions.
That's a good reason for using compiler builds from ARM, rather than
microcontroller vendors - they know the cpus better. Target-specific
optimisations are always passed on to mainline gcc (and clang/llvm), but
can be in ARM's builds earlier. A vendor that builds its own toolchains
might pull in these patches too, but they might not. (Most vendors of
ARM microcontrollers use the compiler builds from ARM, but these might
be a bit dated.)
[The days of expecting the code to just implement the expressed
algorithm are long past.  "What ELSE are you going to do FOR me?"]
Post by Grant Edwards
As a result I remember failing to get a vendor's "hello world" demo
to run on a Cortex-M0+ because it was too large for both the flash and
RAM available on the lower end of the family.  And it wasn't even
using "printf", just a "low level" serial port driver that should have
been a few hundred bytes of code but was actually something like
10KB.
But, if they published that code, you could inspect it, determine
what it was TRYING to do and fix it (or, take a lesson from it).
It's amusing when these sorts of things are treated as "proprietary
information" ("secret").  There's nothing revolutionary in a GENERIC
standard library implementation (though there are varying degrees
of performance that can be obtained from those that are SPECIALIZED)
David Brown
2024-11-25 08:37:47 UTC
Post by Grant Edwards
Post by David Brown
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains). The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.
But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development. And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be independent.
What he said, definitely: Avoid vendor-specific IDEs and SDKs like the
plague.
I didn't /quite/ say that. Avoid relying on them for your builds, was
what I said. The last thing you want is your project being dependent on
whatever version of the IDE and SDK the supplier provides at a given
time. For a serious project, you want to be able to take a new
computer, check out the old source, install the specific toolchain you
use for the project, and re-build to get exactly the same binary. And
you want to be able to do that a decade later.

I /do/ make use of parts of SDKs. But I figure out what I need, and
copy the source into my project structure. The main point is to avoid
having your project change because it is linked to some SDK stuff that
gets updated by the manufacturer's tools. Library updates happen when
/I/ want them to happen, not on someone else's schedule.

Often, the actual SDK stuff is very inefficient and bizarrely
structured. But it can be convenient for things like initialisation of
complex peripherals, and that's fine - you don't (normally) need
efficiency during initialisation. For real-time access - setting your
pwm values, using your gpio's, etc., - it is usually best to handle
things "manually".
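"Manually" here just means writing the registers directly rather than going through SDK driver calls. A sketch of the idea, using a made-up set/clear-style GPIO layout (the real register names, offsets, and base address come from the device's reference manual, not from here):

```c
#include <stdint.h>

/* Hypothetical GPIO block -- illustrative layout only, not taken
   from any real part's reference manual. */
typedef struct {
    volatile uint32_t DIR;  /* direction: 1 = output  */
    volatile uint32_t SET;  /* write 1 to drive high  */
    volatile uint32_t CLR;  /* write 1 to drive low   */
} gpio_regs_t;

static inline void gpio_set(gpio_regs_t *g, uint32_t mask)   { g->SET = mask; }
static inline void gpio_clear(gpio_regs_t *g, uint32_t mask) { g->CLR = mask; }

/* In real code the pointer would be a fixed base address from the
   datasheet, e.g.  #define GPIO0 ((gpio_regs_t *)GPIO0_BASE)  */
```

A pair of inline functions like this compiles down to a single store, which is the kind of access you want in a real-time path; the SDK call doing the same thing often goes through several layers first.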

And of course the project build gets controlled by external tools. I
like hand-written makefiles, but cmake or whatever suits is fine.

Vendor-supplied tools can be very useful for debugging, however, as well
as being convenient for startup code, device initialisation, examples,
and so on. I am not at all against using them - use every tool you can
get hold of that makes your job easier! But don't make your actual
project binaries dependent on them.
Post by Grant Edwards
Demo apps and libraries from Silicon vendors are usually awful -- even
worse than the toolchains. I'm pretty sure they're written by interns
who think that to be "professional" it has to incorporate layers and
layers of macros and objects and abstraction and polymorphism and
whatnot.
Often the demos are terrible, I agree. And they are often wildly
inconsistent. But that doesn't mean they are completely useless - they
can be inspirational too, and can be helpful to see what you are missing
when your own code doesn't work.
Post by Grant Edwards
As a result I remember failing to get a vendor's "hello world" demo
to run on a Cortex-M0+ because it was too large for both the flash and
RAM available on the lower end of the family. And it wasn't even
using "printf", just a "low level" serial port driver that should have
been a few hundred bytes of code but was actually something like
10KB.
I think the clearest "war story" I have seen of that kind was for a tiny
8-bit device from Freescale. The chip had perhaps 2K of flash. Since
this was a one-off use of the chip and I didn't want to read more
manuals than I needed to, I used the vendor IDE "wizard" to make the
initialisation code and a "driver" for the ADC. The resulting code was
about 3.5 KB - for the 2 KB microcontroller. So I read the manual and
found that the ADC peripheral needed one single bit set to make it run
as I needed.
Grant Edwards
2024-11-25 15:18:35 UTC
Post by David Brown
I think the clearest "war story" I have seen of that kind was for a
tiny 8-bit device from Freescale. The chip had perhaps 2K of flash.
Since this was a one-off use of the chip and I didn't want to read
more manuals than I needed to, I used the vendor IDE "wizard" to make
the initialisation code and a "driver" for the ADC. The resulting
code was about 3.5 KB - for the 2 KB microcontroller. So I read the
manual and found that the ADC peripheral needed one single bit set
to make it run as I needed.
One of the problems that the SDK authors have to try to deal with is
that the peripherals have gotten "too" versatile. They've got a
polled mode, an interrupt-driven mode, a DMA mode, a burst mode, a
batch mode, a right-side-up mode, an upside-down mode, and three
compatibility modes so they'll work like products from 40 years ago.

The SDK "drivers" always try to support all the modes in all possible
combinations and sometimes even allow you to switch back and forth
while the thing is running. They'll have open() and close() methods
for no apparent reason for devices that don't need to be opened or
closed.

As a result, you end up with 3.5KB of driver for an ADC.

Then, in the real world, 99.99% of applications only use one very
specific mode. You either read the hardware description and write
code from scratch (though the #defines for register offsets are
useful), or you pick your way through the "driver" code for your
particular set of modes/choices to find the handful of lines of code
that you need.

Rather than providing a full-up "driver" I would find it a _lot_ more
useful if they just provided documented code snippets to perform some
of the basic operations that one needs to perform.
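The sort of snippet meant here might look like this: a polled one-shot ADC read against a made-up register map (every name, offset, and bit position below is invented for illustration; a real vendor snippet would cite the actual TRM registers):

```c
#include <stdint.h>

/* Hypothetical ADC register block -- names and bit positions are
   invented, not from any real TI part. */
typedef struct {
    volatile uint32_t CTRL;    /* bit 0: enable, bit 1: start  */
    volatile uint32_t STATUS;  /* bit 0: conversion done       */
    volatile uint32_t DATA;    /* conversion result            */
} adc_regs_t;

#define ADC_CTRL_EN      (1u << 0)
#define ADC_CTRL_START   (1u << 1)
#define ADC_STATUS_DONE  (1u << 0)

/* One polled conversion: enable + start, wait for done, read result. */
static uint32_t adc_read_once(adc_regs_t *adc)
{
    adc->CTRL = ADC_CTRL_EN | ADC_CTRL_START;
    while (!(adc->STATUS & ADC_STATUS_DONE))
        ;                       /* busy-wait until conversion completes */
    return adc->DATA;
}
```

A dozen documented lines like that cover the one mode most applications actually use, and anyone needing DMA or interrupts can extend it themselves.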
David Brown
2024-11-25 16:33:29 UTC
Post by Grant Edwards
Post by David Brown
I think the clearest "war story" I have seen of that kind was for a
tiny 8-bit device from Freescale. The chip had perhaps 2K of flash.
Since this was a one-off use of the chip and I didn't want to read
more manuals than I needed to, I used the vendor IDE "wizard" to make
the initialisation code and a "driver" for the ADC. The resulting
code was about 3.5 KB - for the 2 KB microcontroller. So I read the
manual and found that the ADC peripheral needed one single bit set
to make it run as I needed.
One of the problems that the SDK authors have to try to deal with is
that the peripherals have gotten "too" versatile. They've got a
polled mode, an interrupt-driven mode, a DMA mode, a burst mode, a
batch mode, a right-side-up mode, an upside-down mode, and three
compatibility modes so they'll work like products from 40 years ago.
The SDK "drivers" always try to support all the modes in all possible
combinations and sometimes even allow you to switch back and forth
while the thing is running. They'll have open() and close() methods
for no apparent reason for devices that don't need to be opened or
closed.
As a result, you end up with 3.5KB of driver for an ADC.
Then, in the real world, 99.99% of applications only use one very
specific mode. You either read the hardware description and write
code from scratch (though the #defines for register offsets are
useful), or you pick your way through the "driver" code for your
particular set of modes/choices to find the handful of lines of code
that you need.
Rather than providing a full-up "driver" I would find it a _lot_ more
useful if they just provided documented code snippets to perform some
of the basic operations that one needs to perform.
You are absolutely correct here.

Manufacturers should employ some developers with experience actually
/using/ microcontrollers. The hardware designers should not be allowed
to put in any features that don't pass a "yes, I'd use that" test. Then
the SDK designers (if such people exist) would get similar reality checks.

And the SDK people should learn that the year is 2024. We don't need
C90 compatibility - we want decent /modern/ interfaces using all the
useful bells and whistles from C17, C++20, and gcc extensions so we can
write safer, clearer and more efficient code. Leave the shitty macros,
void* pointers and run-time checking of compile-time values behind.
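One small illustration of the difference (names invented, not from any SDK): C11/C17 static_assert rejects a bad configuration value when the file is compiled, and a typed inline function replaces the void*-juggling macro, so nothing is left to check at run time on the target:

```c
#include <assert.h>   /* static_assert macro, C11/C17 */
#include <stdint.h>

/* Invented example configuration value. */
#define TIMER_PRESCALER 64u

/* Checked at compile time, not when the device boots. */
static_assert((TIMER_PRESCALER & (TIMER_PRESCALER - 1u)) == 0u,
              "prescaler must be a power of two");

/* Typed inline function instead of an untyped macro. */
static inline uint32_t ticks_to_us(uint32_t ticks, uint32_t clock_hz)
{
    return (uint32_t)((uint64_t)ticks * 1000000u / clock_hz);
}
```

The intermediate arithmetic is widened to uint64_t so the microsecond conversion can't overflow for realistic tick counts, something the typical C90-era macro version silently gets wrong.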
Don Y
2024-11-25 20:47:41 UTC
Post by Grant Edwards
Post by David Brown
I think the clearest "war story" I have seen of that kind was for a
tiny 8-bit device from Freescale. The chip had perhaps 2K of flash.
Since this was a one-off use of the chip and I didn't want to read
more manuals than I needed to, I used the vendor IDE "wizard" to make
the initialisation code and a "driver" for the ADC. The resulting
code was about 3.5 KB - for the 2 KB microcontroller. So I read the
manual and found that the ADC peripheral needed one single bit set
to make it run as I needed.
One of the problems that the SDK authors have to try to deal with is
that the peripherals have gotten "too" versatile. They've got a
polled mode, an interrupt driven mode, a DMA mode, a burst mode, a
batch mode, a right-side up mode, an updisde-down mode, and three
compatibility modes so they'll work like products from 40 years ago.
The SDK "drivers" always try to support all the modes in all possible
combinations and sometimes even allow you to switch back and forth
while the thing is running. They'll have open() and close() methods
for no apparent reason for devices that don't need to be opened or
closed.
As a result, you end up with 3.5KB of driver for an ADC.
To be fair, they often have to include the initialization/configuration
code that a "real" system would have put elsewhere, hiding the true
size of the code required to use the peripheral.
Post by Grant Edwards
Then, in the real world. 99.99% of applicatoins only use one very
sepecific mode. You either read the hardware description and write
code from scratch (though the #defines for register offsets are
useful), or you pick your way through the "driver" code for you
particular set of modes/choices to find the handful of lines of code
that you need.
Rather than providing a full-up "driver" I would find it a _lot_ more
useful if they just provided documented code snippets to perform some
of the basic operations that one needs to perform.
Yes.

Though the SDK has value if only for SUGGESTING names for the various
symbolic constants needed to configure the device.

The AM62x family reference manual is 16000 pages -- 12000 of those
are "register descriptions" (tabulated). That's a shitload of
#defines to have to create from scratch!

Given that the SDK had to come up with unique names for all of them,
it's a headstart in designing your own naming convention for the device.
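The kind of convention an SDK header establishes looks something like this (every name, address, offset, and bit position below is invented for illustration, not taken from TI's actual headers):

```c
#include <stdint.h>

/* SDK-ish naming convention for a hypothetical UART:
   MODULE<n>_BASE, MODULE_<REG>, MODULE_<REG>_<FIELD>.
   All values here are made up. */
#define UART0_BASE     0x02800000u
#define UART_LSR       0x14u         /* line status register offset */
#define UART_LSR_THRE  (1u << 5)     /* TX holding register empty   */

/* Volatile read of a 32-bit register at an absolute address. */
static inline uint32_t reg_read32(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

/* Typical use: poll until the transmitter is ready:
   while (!(reg_read32(UART0_BASE + UART_LSR) & UART_LSR_THRE)) { }  */
```

Even if you rewrite every driver, keeping the SDK's BASE/REG/FIELD naming scheme means thousands of register #defines don't have to be invented and proofread from a 16000-page manual.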

[I'm going to bow out of this discussion as there is something clearly
wrong with my server/attendant -- and fixing it is not a current priority.
The fourth quarter is traditionally my "update equipment, applications,
licenses" period along with the various birthdays, anniversaries and
holidays that crowd into those three months. The NNTP server/attendant
wasn't on the list for an upgrade as I *thought* it was working well
(seems to be for other newsgroups and the phone attendant hasn't had
any problems!)

I'll see if I can rescue another box and rebuild it (silly not to
take advantage of that need to also upgrade the hardware!)]
pozz
2024-11-26 12:10:31 UTC
Permalink
Post by Grant Edwards
Post by David Brown
Some vendor-supplied toolchains are not bad, but some are definitely
subpar - and often many years behind the versions you get from
manufacturer independent suppliers (like ARM's build of a gcc toolchain,
or commercial gcc toolchains).  The biggest problem with microcontroller
manufacturer's tools is usually the SDK's that are frequently horrible
in all sorts of ways.
But I agree with your advice - where possible, use ARM's gcc toolchain
build for ARM development.  And make sure your project build is
independent of any IDE, whether it is from the vendor or independent.
IDE's are great for coding, and vendor-supplied IDE's can give good
debugging tools, but you want the project build itself to be
independent.
What he said, definitely: Avoid vendor-specific IDEs and SDKs like the
plague.
I didn't /quite/ say that.  Avoid relying on them for your builds, was
what I said.  The last thing you want is your project being dependent on
whatever version of the IDE and SDK the supplier provides at a given
time.  For a serious project, you want to be able to take a new
computer, check out the old source, install the specific toolchain you
use for the project, and re-build to get exactly the same binary.  And
you want to be able to do that a decade later.
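One way to make the "same binary a decade later" requirement concrete is to have the build refuse to run unless the pinned toolchain is actually installed, rather than silently picking up whatever is on PATH. A minimal sketch (the toolchain path and version are illustrative):

```shell
#!/bin/sh
# check_pinned: fail the build if the pinned tool is missing, instead of
# silently falling back to whatever compiler happens to be on PATH.
check_pinned() {
    if [ -x "$1" ]; then
        echo "using pinned tool: $1"
    else
        echo "pinned tool missing: $1" >&2
        return 1
    fi
}

# In a real project this would name an exact release, e.g.:
#   check_pinned "$HOME/toolchains/arm-gnu-13.2.rel1/bin/arm-none-eabi-gcc"
# Here we demonstrate with a tool that exists on any POSIX host:
check_pinned /bin/sh
```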
I /do/ make use of parts of SDKs.  But I figure out what I need, and
copy the source into my project structure.  The main point is to avoid
having your project change because it is linked to some SDK stuff that
gets updated by the manufacturer's tools.  Library updates happen when
/I/ want them to happen, not on someone else's schedule.
Often, the actual SDK stuff is very inefficient and bizarrely
structured.  But it can be convenient for things like initialisation of
complex peripherals, and that's fine - you don't (normally) need
efficiency during initialisation.  For real-time access - setting your
pwm values, using your gpio's, etc., - it is usually best to handle
things "manually".
And of course the project build gets controlled by external tools.  I
like hand-written makefiles, but cmake or whatever suits is fine.
Vendor-supplied tools can be very useful for debugging, however, as well
as being convenient for startup code, device initialisation, examples,
and so on.  I am not at all against using them - use every tool you can
get hold of that makes your job easier!  But don't make your actual
project binaries dependent on them.
Unfortunately newer MCUs are little monsters with a plethora of complex
peripherals (USB, Ethernet MAC, clocks, ...), a multitude of pins with
several functions each, and so on.

I think it's quite difficult to write an initialization procedure from
scratch that configures everything correctly.
So I use the tool that the vendor gives me. Many times it's an SDK with
functions to call and, in this case, I try to configure the project to
use a "local copy" of the SDK (as you wrote, I copy the SDK source code
into my project folder).

However, newer tools often generate source code for you after you
configure the system through a graphical interface. In this scenario, the
tool usually reads the configuration from a file and can re-generate the
same source code from it.
Here you /have/ a local copy of the source code generated by the tool,
but what happens if you want to change something later, maybe after some
years?

First of all, you need to save the configuration file. Then you need the
same graphical tool (the same version as the original, if you want
identical source code) to make all the changes you need.
Maybe you want to upgrade the tool if it has fixed some bugs in the meantime.

You can organize your project so that you need only make, gcc and ld for
the building process, but you can't be sure you won't ever need to use
the vendor tools again (if you needed to use them once, you could need
them again in the future). So you are forced to add the configuration
files of these tools to the repository, along with instructions on how
to use them to obtain the same source code.
Maybe you need to save the installers of these tools too.
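A cheap way to enforce that "save everything" rule is a pre-build check that the tool's configuration file, the committed generated code, and a note of the exact tool version are all present in the repository. A sketch with hypothetical file names:

```shell
#!/bin/sh
# check_repo_files: fail early if anything needed to regenerate the vendor
# tool's output is missing from the repository.
check_repo_files() {
    status=0
    for f in "$@"; do
        if [ ! -e "$f" ]; then
            echo "missing from repo: $f" >&2
            status=1
        fi
    done
    return $status
}

# In a real project the list would name the tool's config file, the
# committed generated sources, and a note of the exact tool version, e.g.:
#   check_repo_files board_config.xml generated/ tools/VERSIONS.txt
# Here we demonstrate with a path that exists on any POSIX host:
check_repo_files /bin/sh
```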
Post by Grant Edwards
Demo apps and libraries from Silicon vendors are usually awful -- even
worse than the toolchains. I'm pretty sure they're written by interns
who think that to be "professional" it has to incorporate layers and
layers of macros and objects and abstraction and polymorphism and
whatnot.
Often the demos are terrible, I agree.  And they are often wildly
inconsistent.  But that doesn't mean they are completely useless - they
can be inspirational too, and can be helpful to see what you are missing
when your own code doesn't work.
Post by Grant Edwards
As a result I remember failing to get a vendor's "hello world" demo
to run on a Cortex-M0+ because it was too large for both the flash and
RAM available on the lower end of the family.  And it wasn't even
using "printf", just a "low level" serial port driver that should have
been a few hundred bytes of code but was actually something like
10KB.
I think the clearest "war story" I have seen of that kind was for a tiny
8-bit device from Freescale.  The chip had perhaps 2K of flash.  Since
this was a one-off use of the chip and I didn't want to read more
manuals than I needed to, I used the vendor IDE "wizard" to make the
initialisation code and a "driver" for the ADC.  The resulting code was
about 3.5 KB - for the 2 KB microcontroller.  So I read the manual and
found that the ADC peripheral needed one single bit set to make it run
as I needed.