Discussion:
C: Syntax to allocate global variables to consecutive memory locations
t***@gmail.com
2007-04-09 05:53:41 UTC
Hello all

I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. Will appreciate if anyone can list the
syntax / instructions to go about the same.

thanks

techie
David Empson
2007-04-09 07:50:05 UTC
Post by t***@gmail.com
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. Will appreciate if anyone can list the
syntax / instructions to go about the same.
This is not a feature of standard C. Some C compilers for embedded
platforms have their own proprietary methods of doing this sort of
thing. In other cases you might have to resort to declaring variables in
an assembly language source file to ensure they appear at a known memory
location.

In some cases, you can rely on known behaviour of a specific compiler so
that a particular order of variable declaration will result in a
particular memory layout.

In any case, you might need to give special instructions to the linker
to reserve a memory area for an unusual purpose.

None of these mechanisms are portable to other platforms or compilers.

What microprocessor and compiler/toolchain are you using?
--
David Empson
***@actrix.gen.nz
David R Brooks
2007-04-09 09:44:16 UTC
Post by David Empson
Post by t***@gmail.com
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. Will appreciate if anyone can list the
syntax / instructions to go about the same.
This is not a feature of standard C. Some C compilers for embedded
platforms have their own proprietary methods of doing this sort of
thing. In other cases you might have to resort to declaring variables in
an assembly language source file to ensure they appear at a known memory
location.
In some cases, you can rely on known behaviour of a specific compiler so
that a particular order of variable declaration will result in a
particular memory layout.
In any case, you might need to give special instructions to the linker
to reserve a memory area for an unusual purpose.
None of these mechanisms are portable to other platforms or compilers.
What microprocessor and compiler/toolchain are you using?
One way (rather awkward, but works) would be to address them entirely
via pointers: eg
typedef volatile int* pInt;
pInt my1 = (pInt)0x8000;
pInt my2 = (pInt)0x8004; // Strictly, (pInt)(0x8000 + sizeof(int))
...
Hans-Bernhard Bröker
2007-04-09 21:59:27 UTC
Post by David R Brooks
One way (rather awkward, but works) would be to address them entirely
To be nitpickingly precise: no, that doesn't really work. There's no
well-defined mapping from integers to pointers, so any code that casts
an integer constant to a pointer type and proceeds to dereference that
pointer exhibits undefined behaviour. Which is as far from "works" as
one can possibly be and still _sometimes_ get a working program.
Rob Windgassen
2007-04-10 10:56:18 UTC
Post by Hans-Bernhard Bröker
Post by David R Brooks
One way (rather awkward, but works) would be to address them entirely
To be nitpickingly precise: no, that doesn't really work. There's no
well-defined mapping from integers to pointers, so any code that casts
an integer constant to a pointer type and proceeds to dereference that
pointer exhibits undefined behaviour. Which is as far from "works" as
one can possibly be and still _sometimes_ get a working program.
Just another nitpicking...
Casting integers to pointers is implementation-defined rather than
undefined...

<quote 6.3.2.3 from C99>
3 An integer constant expression with the value 0, or such an expression
cast to type void *, is called a null pointer constant. If a null pointer
constant is converted to a pointer type, the resulting pointer, called a
null pointer, is guaranteed to compare unequal to a pointer to any object
or function.

<...>

5 An integer may be converted to any pointer type. Except as previously
specified, the result is implementation-defined, might not be correctly
aligned, might not point to an entity of the referenced type, and might be
a trap representation.
</quote>

Rob
Hans-Bernhard Bröker
2007-04-10 23:32:54 UTC
Post by Rob Windgassen
Post by Hans-Bernhard Bröker
To be nitpickingly precise: no, that doesn't really work. There's no
well-defined mapping from integers to pointers, so any code that casts
an integer constant to a pointer type and proceeds to dereference that
pointer exhibits undefined behaviour. Which is as far from "works" as
one can possibly be and still _sometimes_ get a working program.
Just another nitpicking...
Casting integers to pointers is implementation-defined rather than
undefined...
You stopped reading too soon. I included the part "and proceeds to
dereference that pointer" in that statement for a reason.

[...]
Post by Rob Windgassen
5 An integer may be converted to any pointer type. Except as previously
specified, the result is implementation-defined, might not be correctly
aligned, might not point to an entity of the referenced type, and might be
a trap representation.
Exactly. And now you have to look up what happens if you dereference a
pointer that currently is a trap representation. Guess what: that
causes undefined behaviour.
Rob Windgassen
2007-04-11 21:54:48 UTC
Post by Hans-Bernhard Bröker
Post by Rob Windgassen
Post by Hans-Bernhard Bröker
To be nitpickingly precise: no, that doesn't really work. There's no
well-defined mapping from integers to pointers, so any code that casts
an integer constant to a pointer type and proceeds to dereference that
pointer exhibits undefined behaviour. Which is as far from "works" as
one can possibly be and still _sometimes_ get a working program.
Just another nitpicking...
Casting integers to pointers is implementation-defined rather than
undefined...
You stopped reading too soon. I included the part "and proceeds to
dereference that pointer" in that statement for a reason.
[...]
Post by Rob Windgassen
5 An integer may be converted to any pointer type. Except as previously
specified, the result is implementation-defined, might not be correctly
aligned, might not point to an entity of the referenced type, and might be
a trap representation.
Note: the above sentence contains the word 'might' three times.
Post by Hans-Bernhard Bröker
Exactly. And now you have to look up what happens if you dereference a
pointer that currently is a trap representation. Guess what: that
causes undefined behaviour.
The word _might_ occurs three times in the quoted sentence.
You are reasoning as if at least one of them is a _must_, which is not
correct.

Rob
Hans-Bernhard Bröker
2007-04-11 22:52:36 UTC
Post by Rob Windgassen
Post by Hans-Bernhard Bröker
Exactly. And now you have to look up what happens if you dereference a
pointer that currently is a trap representation. Guess what: that
causes undefined behaviour.
The word _might_ occurs three times in the quoted sentence.
You are reasoning as if at least one of them is a _must_, which is not
correct.
No, that's not at all what I'm arguing. The point is this: if a piece
of code has any possibility of leading to undefined behaviour, then its
behaviour is rather obviously not "defined". Which means that such a
piece of code exhibits undefined behaviour. It's as simple as that.

Behaviour of a piece of code is either defined (by the language standard
or the implementation), or it's not. Tertium non datur.
Rob Windgassen
2007-04-12 22:31:42 UTC
Post by Hans-Bernhard Bröker
Post by Hans-Bernhard Bröker
Exactly. And now you have to look up what happens if you dereference
a pointer that currently is a trap representation. Guess what: that
causes undefined behaviour.
The word _might_ occurs three times in the quoted sentence. You are
reasoning as if at least one of them is a _must_, which is not correct.
No, that's not at all what I'm arguing. The point is this: if a piece
of code has any possibility of leading to undefined behaviour,
Casting integers to pointers is implementation-defined. That means you
have to check the documentation of your implementation.
Many compilers in the embedded world support this.
Using a valid integer value (correct alignment, maps onto valid address,
...) will not result in undefined behaviour.
It's as simple as that.

Of course, using a wrong integer value is another story. Preventing that
is not a matter of 'possibility' but rather of proper design and
implementation of the embedded system.

The resulting code may not work for another compiler and/or another
platform, but this is c.a.e., isn't it?


Rob
t***@gmail.com
2007-04-09 11:24:19 UTC
Post by David Empson
This is not a feature of standard C. Some C compilers for embedded
platforms have their own proprietary methods of doing this sort of
thing. In other cases you might have to resort to declaring variables in
an assembly language source file to ensure they appear at a known memory
location.
In some cases, you can rely on known behaviour of a specific compiler so
that a particular order of variable declaration will result in a
particular memory layout.
In any case, you might need to give special instructions to the linker
to reserve a memory area for an unusual purpose.
None of these mechanisms are portable to other platforms or compilers.
What microprocessor and compiler/toolchain are you using?
Well, I'm using Code Composer Studio from Texas Instruments.
So, do u then suggest that I carry out those routines in Assembly
only ???

Also, is the method of using Pointers like suggested above safe ???
what sort of awkwardness would that be causing to my program ???

thanks

techie
Deep Reset
2007-04-09 11:44:37 UTC
Post by t***@gmail.com
Post by David Empson
This is not a feature of standard C. Some C compilers for embedded
platforms have their own proprietary methods of doing this sort of
thing. In other cases you might have to resort to declaring variables in
an assembly language source file to ensure they appear at a known memory
location.
In some cases, you can rely on known behaviour of a specific compiler so
that a particular order of variable declaration will result in a
particular memory layout.
In any case, you might need to give special instructions to the linker
to reserve a memory area for an unusual purpose.
None of these mechanisms are portable to other platforms or compilers.
What microprocessor and compiler/toolchain are you using?
Well, I'm using Code Composer Studio from Texas Instruments.
So, do u then suggest that I carry out those routines in Assembly
only ???
Also, is the method of using Pointers like suggested above safe ???
what sort of awkwardness would that be causing to my program ???
thanks
techie
Techie,
you might want to take a look at your keyboard - it looks like your question
mark key is a little sticky.

Deep.
R Pradeep Chandran
2007-04-09 16:45:47 UTC
Post by t***@gmail.com
Well, I'm using Code Composer Studio from Texas Instruments.
So, do u then suggest that I carry out those routines in Assembly
only ???
Not necessarily. See Vladimir Vassilevsky's reply later in the thread.
I am just expanding his response.

Many of the cross compilers for embedded systems allow you to define
memory sections. These definitions usually go into the linker control
files. If your tool chain comes with an IDE, it will usually allow you
to define it using the IDE without having to modify the linker control
files directly. You can specify the memory range and a name for the
section. Please note that the tool chain will have some predefined
sections. Ensure that your definitions do not conflict with these. The
compiler will provide a proprietary extension (perhaps using pragma
directives) to specify that a variable belongs to a section. If you
define more variables than could be accommodated in the section,
you usually get a linker error.

If you want to make sure that the variables are located in a specific
order, you have two options as pointed out by Vladimir.

1) If all the variables are of the same type, use an array. If you
want to refer to individual variables by name (and not an array
index), declare pointers to them.

2) If the variables are of different types, use a structure and
declare the variables in the order in which you want them to be
stored. You have to make sure that the structure padding is
acceptable. Most cross compilers allow you to specify the padding.
Once the structure is defined, declare a variable of this structure
type and put it in the section.
Post by t***@gmail.com
Also, is the method of using Pointers like suggested above safe ???
what sort of awkwardness would that be causing to my program ???
If you are referring to the post by David R Brooks, the method would
work in many cases. However, you may run into problems with the linker
allocating other variables at this location. I strongly recommend
against that approach especially if you are new to embedded
programming.

These are a few directions to get you started. There are many other
issues that might come up later. Refer to the documentation of your tool
chain for the inner details.

Have a nice day,
Pradeep
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
TheDoc
2007-04-10 00:00:34 UTC
Post by t***@gmail.com
Post by David Empson
This is not a feature of standard C. Some C compilers for embedded
platforms have their own proprietary methods of doing this sort of
thing. In other cases you might have to resort to declaring variables in
an assembly language source file to ensure they appear at a known memory
location.
In some cases, you can rely on known behaviour of a specific compiler so
that a particular order of variable declaration will result in a
particular memory layout.
In any case, you might need to give special instructions to the linker
to reserve a memory area for an unusual purpose.
None of these mechanisms are portable to other platforms or compilers.
What microprocessor and compiler/toolchain are you using?
Well, I'm using Code Composer Studio from Texas Instruments.
So, do u then suggest that I carry out those routines in Assembly
only ???
Also, is the method of using Pointers like suggested above safe ???
what sort of awkwardness would that be causing to my program ???
thanks
techie
In an earlier post you say you "need" to store the variables at 0x8000 etc.

Why do you "need" to do this? And I assume, from TI CCS, that you have a DSP?

I have used CCS on many targets and I have never had the "need" to
explicitly place variables at any specific location.

If you must do this, then define a linker segment/section at that address and
put your variables there.

Please elaborate.
Walter Banks
2007-04-09 10:47:40 UTC
Post by t***@gmail.com
I need to store at consecutive memory locations,
say starting from 0x8000. Will appreciate if anyone can list the
syntax / instructions to go about the same.
Byte Craft's compilers have a language extension that allows
defining the location of variables.

int abc @ 0x123;

for example.

w..
--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
Jonathan Kirwan
2007-04-09 18:04:23 UTC
On Mon, 09 Apr 2007 06:47:40 -0400, Walter Banks
Post by Walter Banks
Post by t***@gmail.com
I need to store at consecutive memory locations,
say starting from 0x8000. Will appreciate if anyone can list the
syntax / instructions to go about the same.
Byte Craft's compilers have a language extension that allows
defining the location of variables.
for example.
Hi, Walter. Does this look like a storage request to the linker? Or
does it merely create a symbolic linker value?

To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
of:

_abc EQU 0x123
PUBLIC _abc

This places a symbolic value in the OBJ file that can be linked into
other files and used as constant values. In c, I might use:

extern int abc;
#define MYCONST (&abc)

and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.

If your c compiler creates the object equivalent of a PUBDEF and an
EXTDEF, but no LEDATA, from:

int abc @ 0x123;

That would be interesting.

I hope that sharpens my question.

Jon
Stan Katz
2007-04-09 23:32:52 UTC
Post by Jonathan Kirwan
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
The way to do this is with the const keyword; in one module you have the
line

const int abc = 0x123;

In the corresponding header file you have

extern const int abc;

then any module that includes the header file can refer to the integer
constant abc, and to change the value of abc you simply change the value
in the module where it is defined (and assigned), recompile that module
and re-link.

Stan Katz
Jonathan Kirwan
2007-04-09 22:49:39 UTC
Post by Stan Katz
Post by Jonathan Kirwan
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
The way to do this is with the const keyword; in one module you have the
line
const int abc = 0x123;
In the corresponding header file you have
extern const int abc;
then any module that includes the header file can refer to the integer
constant abc, and to change the value of abc you simply change the value in the module
where it is defined (and assigned), recompile that module and re-link.
Thanks, Stan. But this doesn't do what I said. Did you read me
carefully?

Jon
Jonathan Kirwan
2007-04-09 22:57:01 UTC
Did you read me carefully?
By this, I am referring to the two parts, the "Does this look like a
storage request to the linker? Or does it merely create a symbolic
linker value?" and the "no LEDATA" comments I made.

I'm asking about link-time constants which do NOT generate static
storage.

Jon
R Pradeep Chandran
2007-04-09 23:04:06 UTC
Post by Stan Katz
Post by Jonathan Kirwan
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
The way to do this is with the const keyword; in one module you have the
line
const int abc = 0x123;
In the corresponding header file you have
extern const int abc;
then any module that includes the header file can refer to the integer
constant abc, and to change the value of abc you simply change the
value in the module where it is defined (and assigned), recompile that
module and re-link.
This is what I also thought at first. This approach does require the
compilation of the source file containing the constant.

However, when I thought more about it I found another problem with
this approach. The compiler is not required to assign storage for the
constant. It could replace it with a literal constant (similar to
#define). Declaring it as volatile might help.

volatile const int abc = 0x123;

and

extern volatile const int abc;

Have a nice day,
Pradeep
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
R Pradeep Chandran
2007-04-09 22:56:43 UTC
On Mon, 09 Apr 2007 18:04:23 GMT, Jonathan Kirwan
Post by Jonathan Kirwan
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
Are you sure that only the link process is required? I think that you
need to compile (or assemble) the file containing the declaration of
abc (or _abc).

Have a nice day,
Pradeep
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
Jonathan Kirwan
2007-04-09 23:42:31 UTC
Post by R Pradeep Chandran
On Mon, 09 Apr 2007 18:04:23 GMT, Jonathan Kirwan
Post by Jonathan Kirwan
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
Are you sure that only the link process is required?
Yes. Been there, done it.
Post by R Pradeep Chandran
I think that you
need to compile (or assemble) the file containing the declaration of
abc (or _abc).
In the example I gave, where I've used two source files -- one in
assembly and one in c -- changing a constant value requires editing
the assembly code and re-assembly of that file in order to produce the
OBJ file, plus a link step to complete the executable.

My point was that there was no _c_ compilation required -- not that
the file containing the constants didn't need re-assembly.

Normally, in c-only projects, the only type of symbolic named constant
that doesn't require storage is found in the pre-processor's #define.
If you change the value of such symbolic constants, all of the c
source files which reference those constants must be recompiled by the
c compiler.

In the example I gave, no c source file which references those
constants need to be recompiled.

These kinds of constants are nothing new in object files. They've
been in use for the 30+ years I've been programming and they existed
in the linkers I've written in my long past. The current linkers
required for c do this all the time, as well. The only problem is
that there is no syntax in c specifically for this semantic.

People who can only think in terms of the higher-level language
semantics they are hand-fed conflate the ideas I'm talking about. In
c, for example, the language will only generate a link-time symbolic
reference from a declaration where the definition is external to the
module. And when it does so, it depends upon the linker to provide
the actual value for that link-time symbol. The only other case with
c is where the compiler generates a definition (and a declaration) of
the object -- but that requires storage -- and in any case, it again
allows the linker to actually fill in the symbolic value that replaces
other symbolic references, once the linker is finished with placement.
In the assembly language case I mentioned, the assembler is perfectly
free to provide a link-time symbolic constant where the value is
provided by the assembler and not by the linker's location process.
This semantic ability is explicitly missing in c.

A possibly good way to handle this without changing the c standard (at
least, not changing the behavior of resulting code) would be to follow
how a smart c++ compiler and its linker might interpret Stan's
example:

const int abc= 0x123;

In c, this is not only a declaration but also a definition. It
creates storage for 'abc' and initializes it -- in other words, there
exists an address where the value 0x123 is stored, at run-time. If
you placed that into a common include file, used by a variety of c
source files, you'd be in a peck of trouble as each compilation unit
would provide separate definitions. If placed into a single c file
for compilation, storage would be allocated when perhaps none would be
required.

In c++, if my feeble recollection about it remains, it is possible for
the c++ compiler to remove the storage (if the compiler and linker are
able to determine that an address reference is never required) and
simply replace the value. I have one central problem in talking about
c++: I don't use c++ in my embedded work. I also don't know of any
embedded c++ compilers and linker combinations that achieve this, in
practice. But even if they existed it wouldn't change my question
regarding c.

Jon
R Pradeep Chandran
2007-04-10 00:51:05 UTC
On Mon, 09 Apr 2007 23:42:31 GMT, Jonathan Kirwan
Post by Jonathan Kirwan
Post by R Pradeep Chandran
On Mon, 09 Apr 2007 18:04:23 GMT, Jonathan Kirwan
<snip>
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Post by Jonathan Kirwan
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Are you sure that only the link process is required?
Yes. Been there, done it.
Post by R Pradeep Chandran
I think that you
need to compile (or assemble) the file containing the declaration of
abc (or _abc).
In the example I gave, where I've used two source files -- one in
assembly and one in c -- changing a constant value requires editing
the assembly code and re-assembly of that file in order to produce the
OBJ file, plus a link step to complete the executable.
My point was that there was no _c_ compilation required -- not that
the file containing the constants didn't need re-assembly.
I agree that no C files need to be compiled. But, you said that only
the link process is required. I was merely pointing out that the
statement you made is not entirely accurate.
Post by Jonathan Kirwan
People who can only think in terms of the higher-level language
semantics they are hand-fed conflate the ideas I'm talking about.
They do, don't they? :-)
Post by Jonathan Kirwan
In
c, for example, the language will only generate a link-time symbolic
reference from a declaration where the definition is external to the
module. And when it does so, it depends upon the linker to provide
the actual value for that link-time symbol. The only other case with
c is where the compiler generates a definition (and a declaration) of
the object -- but that requires storage -- and in any case, it again
allows the linker to actually fill in the symbolic value that replaces
other symbolic references, once the linker is finished with placement.
In the assembly language case I mentioned, the assembler is perfectly
free to provide a link-time symbolic constant where the value is
provided by the assembler and not by the linker's location process.
This semantic ability is explicitly missing in c.
I agree. It is nice to have this feature especially for platforms with
very small amounts of memory. I guess not too many people on the ISO
committee felt that this feature was required.
Post by Jonathan Kirwan
A possibly good way to handle this without changing the c standard (at
least, not changing the behavior of resulting code) would be to follow
how a smart c++ compiler and its linker might interpret Stan's
const int abc= 0x123;
In c, this is not only a declaration but also a definition. It
creates storage for 'abc' and initializes it -- in other words, there
exists an address where the value 0x123 is stored, at run-time. If
you placed that into a common include file, used by a variety of c
source files, you'd be in a peck of trouble as each compilation unit
would provide separate definitions. If placed into a single c file
for compilation, storage would be allocated when perhaps none would be
required.
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
Post by Jonathan Kirwan
In c++, if my feeble recollection about it remains, it is possible for
the c++ compiler to remove the storage (if the compiler and linker are
able to determine that an address reference is never required) and
simply replace the value. I have one central problem in talking about
c++: I don't use c++ in my embedded work. I also don't know of any
embedded c++ compilers and linker combinations that achieve this, in
practice. But even if they existed it wouldn't change my question
regarding c.
Tasking C++ compiler for the C166/ST10 platform (v6 r7 if I remember
correctly) does this optimization. In fact, a later version (not sure
which one) does this for C as well.

Also, as far as putting const declarations in header files is
concerned, there is a difference between C and C++.

By default, const has external linkage in C. By default, const has
internal linkage in C++. This is what creates the separate definitions
that you mentioned.

Have a nice day,
Pradeep
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
Jonathan Kirwan
2007-04-10 01:23:31 UTC
Post by R Pradeep Chandran
On Mon, 09 Apr 2007 23:42:31 GMT, Jonathan Kirwan
Post by Jonathan Kirwan
Post by R Pradeep Chandran
On Mon, 09 Apr 2007 18:04:23 GMT, Jonathan Kirwan
<snip>
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Post by Jonathan Kirwan
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Are you sure that only the link process is required.
Yes. Been there, done it.
Post by R Pradeep Chandran
I think that you
need to compile (or assemble) the file containing the declaration of
abc (or _abc).
In the example I gave, where I've used two source files -- one in
assembly and one in c -- changing a constant value requires editing
the assembly code and re-assembly of that file in order to produce the
OBJ file, plus a link step to complete the executable.
My point was that there was no _c_ compilation required -- not that
the file containing the constants didn't need re-assembly.
I agree that no C files need to be compiled. But, you said that only
the link process is required. I was merely pointing out that the
statement you made is not entirely accurate.
But I said, "... without requiring recompilation of the c sources." I
agree that I am not always clear. But I think I did a fair job here
in being clear. In addition to the above, I provided an explicit
example along with the comment, "... creates the object equivalent of
a PUBDEF and an EXTDEF, but no LEDATA..." It's hard to do more,
frankly. I did work at it. If your correction helps anyone else, I'm
glad you added it.
Post by R Pradeep Chandran
Post by Jonathan Kirwan
People who can only think in terms of the higher level language
semantics they are hand-fed, conflate the ideas I'm talking about.
They do, don't they? :-)
Post by Jonathan Kirwan
In
c, for example, the language will only generate a link-time symbolic
reference from a declaration where the definition is external to the
module. And when it does so, it depends upon the linker to provide
the actual value for that link-time symbol. The only other case with
c is where the compiler generates a definition (and a declaration) of
the object -- but that requires storage -- and in any case, it again
allows the linker to actually fill in the symbolic value that replaces
other symbolic references, once the linker is finished with placement.
In the assembly language case I mentioned, the assembler is perfectly
free to provide a link-time symbolic constant where the value is
provided by the assembler and not by the linker's location process.
This semantic ability is explicitly missing in c.
I agree. It is nice to have this feature especially for platforms with
very small amounts of memory. I guess not too many people on the ISO
committee felt that this feature was required.
My point wasn't about situations with very small amounts of memory. I
would use this feature on very large systems with hundreds of small
source files, in fact. And use it well.

In any case, I'm sure there were a lot of things on their (ISO) plate
and I am just interested in Walter's comment. I'm not trying to
make a case for some ISO committee, much as I would like being able to
control the value of link-time symbolics. If I were tilting that
windmill, I've got much better issues to present them which would make
a much larger difference in my life.

(Of course, if I were pressing the issue there I'd probably point out
that the feature could be designed to have added no burden and cost
nothing in terms of the development tools while providing a useful new
semantic. Not that such an argument would have aided the idea.)
Post by R Pradeep Chandran
Post by Jonathan Kirwan
A possibly good way to handle this without changing the c standard (at
least, not changing the behavior of resulting code) would be to follow
how a smart c++ compiler and its linker might interpret Stan's
const int abc= 0x123;
In c, this is not only a declaration but also a definition. It
creates storage for 'abc' and initializes it -- in other words, there
exists an address where the value 0x123 is stored, at run-time. If
you placed that into a common include file, used by a variety of c
source files, you'd be in a peck of trouble as each compilation unit
would provide separate definitions. If placed into a single c file
for compilation, storage would be allocated when perhaps none would be
required.
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
I disagree. But we'd need to dig into the details to be sure and I
suspect that the compiler vendors would be able to nail that question
far more quickly and decisively than either of us. So if they comment
on this, I'll take their word about it either way.
Post by R Pradeep Chandran
Post by Jonathan Kirwan
In c++, if my feeble recollection about it remains, it is possible for
the c++ compiler to remove the storage (if the compiler and linker are
able to determine that an address reference is never required) and
simply replace the value. I have one central problem in talking about
c++: I don't use c++ in my embedded work. I also don't know of any
embedded c++ compilers and linker combinations that achieve this, in
practice. But even if they existed it wouldn't change my question
regarding c.
Tasking C++ compiler for the C166/ST10 platform (v6 r7 if I remember
correctly) does this optimization. In fact, a later version (not sure
which one) does this for C as well.
Across all modules? I'm glad to hear it happens well in some cases.
Post by R Pradeep Chandran
Also, as far as putting const declarations in header files is
concerned, there is a difference between C and C++.
By default, const has external linkage in C. By default, const has
internal linkage in C++. This is what creates the separate definitions
that you mentioned.
I think my point regarding c remains.

Thanks for the discussion,
Jon
t***@gmail.com
2007-04-10 03:44:58 UTC
Permalink
Thank you all,

for your suggestions.. I'm certain to stumble ahead, but know where to
start off now...

thanks again

krish.
Wilco Dijkstra
2007-04-10 13:23:56 UTC
Permalink
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Post by Jonathan Kirwan
A possibly good way to handle this without changing the c standard (at
least, not changing the behavior of resulting code) would be to follow
how a smart c++ compiler and its linker might interpret Stan's
const int abc= 0x123;
In c, this is not only a declaration but also a definition. It
creates storage for 'abc' and initializes it -- in other words, there
exists an address where the value 0x123 is stored, at run-time. If
you placed that into a common include file, used by a variety of c
source files, you'd be in a peck of trouble as each compilation unit
would provide separate definitions. If placed into a single c file
for compilation, storage would be allocated when perhaps none would be
required.
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
I disagree. But we'd need to dig into the details to be sure and I
suspect that the compiler vendors would be able to nail that question
far more quickly and decisively than either of us. So if they comment
on this, I'll take their word about it either way.
In C you'd have to emit the definition always, in C++ only if the address
was taken. In both cases you can aggressively inline the constant and
the definition will not end up in the executable as the linker will remove it.

To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:

typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004

This means there is little reason to add non-standard extensions.
The only gotcha that remains is ensuring the linker won't use this
memory for something else, this can be done with linker scripts.
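Spelled out as a compilable sketch (the struct and the 0x8000 base are taken from the example above; that the linker actually keeps the region free is an assumption to be handled in the linker script):

```c
#include <stdint.h>

/* Group the variables that must sit at consecutive addresses in a
   struct and access them through a pointer to a fixed base address.
   0x8000 is the example address; the linker script must keep this
   region free of other allocations. */
typedef struct { int x, y, z; } T;

static T *const ptr = (T *)0x8000u;

void store_y(void)
{
    ptr->y = 1;  /* with 4-byte ints and no padding: a write to 0x8004 */
}
```

Because `ptr` is a compile-time constant, the generated code is typically a direct store to the absolute address, with no pointer variable kept in memory.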

Wilco
R Pradeep Chandran
2007-04-10 15:24:59 UTC
Permalink
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<***@ntlworld.com> wrote:

<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Besides, this construct gives a wrong picture. Here storage is not
allocated dynamically. There is also no transfer of storage [1] from
another scope. Under these circumstances, using pointer notation is
misleading.
Post by Wilco Dijkstra
This means there is little reason to add non-standard extensions.
A clean way of avoiding these pointer hacks would be to use a compiler
extension.
Post by Wilco Dijkstra
The only gotcha that remains is ensuring the linker won't use this
memory for something else, this can be done with linker scripts.
There is that too. :-)

Have a nice day,
Pradeep

[1] I am not sure about the term. What I mean is that the variable is
declared in a scope that is visible to the code that operates on it.
Hence there is no reason to use a pointer to the variable.
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
Wilco Dijkstra
2007-04-10 16:51:52 UTC
Permalink
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Most compilers allow this, and don't even give a warning in their strictest
mode. Compilers that don't do the obvious here are considered defective
by their (soon to be ex) customers.
Post by R Pradeep Chandran
Besides, this construct gives a wrong picture. Here storage is not
allocated dynamically. There is also no transfer of storage [1] from
another scope. Under these circumstances, using pointer notation is
misleading.
Pointers are useful in many ways, and definitely not just for dynamically
allocated memory. For example pointers are the most obvious way to
access peripherals.

However if you hate pointers there is nothing that stops you from writing:

#define p (*ptr)
p.y = 1;
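A self-contained version of that macro trick (same hypothetical 0x8000 address as above; the fixed-location object is only addressed, never accessed, in the assertions below):

```c
typedef struct { int x, y, z; } T;

#define ptr ((T *)0x8000)  /* fixed base address, assumed reserved by the linker */
#define p   (*ptr)         /* lets the fixed-location struct read like a variable */

void demo(void)
{
    p.y = 1;  /* exactly the same access as ptr->y = 1 */
}
```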
Post by R Pradeep Chandran
Post by Wilco Dijkstra
This means there is little reason to add non-standard extensions.
A clean way of avoiding these pointer hacks would be to use a compiler
extension.
Unlike the pointer method, that doesn't work on most compilers, and the
semantics of explicit variable placement vary significantly across compilers.
It would be good if it was standardized at some point, but one can hope...

Wilco
David Brown
2007-04-11 11:17:59 UTC
Permalink
Post by Wilco Dijkstra
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Most compilers allow this, and don't even give a warning in their strictest
mode. Compilers that don't do the obvious here are considered defective
by their (soon to be ex) customers.
I too believe that this is correct standard C (perhaps there are issues
if pointers and ints are not the same size?). However, the ordering of
the fields within the struct, along with their sizes and alignments, is
implementation dependent. A compiler may or may not add padding between
fields, and can in fact re-order fields (for example, to make a more
compact struct). Whether that is a problem or not depends on the
requirements.
Wilco Dijkstra
2007-04-11 11:56:07 UTC
Permalink
Post by Wilco Dijkstra
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Most compilers allow this, and don't even give a warning in their strictest
mode. Compilers that don't do the obvious here are considered defective
by their (soon to be ex) customers.
I too believe that this is correct standard C (perhaps there are issues if pointers and
ints are not the same size?).
Sure, you would need to be careful if you want your program to be
portable, eg. use a long (or in C99 intptr_t), but it works fine.
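For instance, a minimal C99 sketch of that advice (the 0x8000 base is the thread's example; the pointer is formed but never dereferenced here):

```c
#include <stdint.h>

/* Carry the address as an integer type guaranteed wide enough to hold
   a pointer (C99 uintptr_t) and convert at the point of use. */
static const uintptr_t REG_BASE = 0x8000u;  /* hypothetical base address */

static volatile int *reg_base(void)
{
    return (volatile int *)REG_BASE;
}
```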
However, the ordering of the fields within the struct, along with their sizes and
alignments, is implementation dependant. A compiler may or may not add padding between
fields, and can in fact re-order fields (for example, to make a more compact struct).
Whether that is a problem or not depends on the requirements.
Yes, this stuff is implementation dependent, but there is only one
reasonable option in most cases (ie. align to natural alignment).
As a result, structure layout is almost always the same if integer sizes
are. Bitfields are generally fine too (except for enum bitfields -
even the latest VC++ still implements them totally incorrectly).

The same argument applies to other issues like signed right-shift or
2's complement arithmetic. It is completely standard to assume both of
these, and all compilers do the right thing. I often told language lawyers
who wanted to add features that break these de-facto assumptions that
they must personally deal with all the support queries from customers
whose programs suddenly stop working as a result... That worked :-)

Wilco
David Brown
2007-04-11 12:45:34 UTC
Permalink
Post by Wilco Dijkstra
Post by Wilco Dijkstra
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Most compilers allow this, and don't even give a warning in their strictest
mode. Compilers that don't do the obvious here are considered defective
by their (soon to be ex) customers.
I too believe that this is correct standard C (perhaps there are issues if pointers and
ints are not the same size?).
Sure, you would need to be careful if you want your program to be
portable, eg. use a long (or in C99 intptr_t), but it works fine.
However, the ordering of the fields within the struct, along with their sizes and
alignments, is implementation dependant. A compiler may or may not add padding between
fields, and can in fact re-order fields (for example, to make a more compact struct).
Whether that is a problem or not depends on the requirements.
Yes, this stuff is implementation dependent, but there is only one
reasonable option in most cases (ie. align to natural alignment).
As a result, structure layout is almost always the same if integer sizes
are. Bitfields are generally fine too (except for enum bitfields -
even the latest VC++ still implements them totally incorrectly).
That's not true if you are looking for portability in the embedded world
- you can't assume that fields will be given their natural alignment. A
compiler for a 16-bit micro will probably align 32-bit data on 16-bit
boundaries, and an 8-bit micro will align everything on 8-bit. So you
have to be careful there.

Bitfields are even worse. Some compilers will allocate bits from the
most significant bit, although most start at the least significant bits.
Standard C will, I believe, allocate a whole int for the bitfield,
while some compilers (like gcc) allow bitfield types to be smaller than
an int, and allocate only as much space as is needed for that type
(i.e., a field declared "short int x : 2" will be placed in a "short"
field, not an "int" field). VC++ is a particularly good example of the
non-portability of bitfields - the ordering of bitfields was changed
from MSB to LSB between versions of VC++. That's not to say bitfields
are bad - you just have to be aware of the issues.
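The size effect of gcc-style small-base-type bit-fields can be checked directly; a sketch (layout is implementation-defined, and the sizes in the comments are what gcc and clang typically produce):

```c
/* Under the gcc rule described above, the declared base type of a
   bit-field selects the storage unit it lives in, so a struct holding
   only a short bit-field can be smaller than one holding an int one. */
struct wide   { int       f : 2; };  /* typically sizeof(int)   */
struct narrow { short int f : 2; };  /* typically sizeof(short) */
```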
Post by Wilco Dijkstra
The same argument applies to other issues like signed rightshift or
2-complements arithmetic. It is completely standard to assume both of
these, and all compilers do the right thing. I often told language lawyers
who wanted to add features that break these de-facto assumptions that
they must personally deal with all the support queries from customers
whose programs suddenly stop working as a result... That worked :-)
Wilco
Wilco Dijkstra
2007-04-11 14:12:53 UTC
Permalink
Post by Wilco Dijkstra
Post by Wilco Dijkstra
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
<snip discussion about const in C & C++ and placing a variable at a
specified memory location>
Post by Wilco Dijkstra
To get back to the OP's question, what he wants to do is perfectly and
efficiently supported in standard C like this:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
Not in standard C. Because this conversion is implementation defined.
Most compilers allow this, and don't even give a warning in their strictest
mode. Compilers that don't do the obvious here are considered defective
by their (soon to be ex) customers.
I too believe that this is correct standard C (perhaps there are issues if pointers
and ints are not the same size?).
Sure, you would need to be careful if you want your program to be
portable, eg. use a long (or in C99 intptr_t), but it works fine.
However, the ordering of the fields within the struct, along with their sizes and
alignments, is implementation dependant. A compiler may or may not add padding
between fields, and can in fact re-order fields (for example, to make a more compact
struct). Whether that is a problem or not depends on the requirements.
Yes, this stuff is implementation dependent, but there is only one
reasonable option in most cases (ie. align to natural alignment).
As a result, structure layout is almost always the same if integer sizes
are. Bitfields are generally fine too (except for enum bitfields -
even the latest VC++ still implements them totally incorrectly).
That's not true if you are looking for portability in the embedded world - you can't
assume that fields will be given their natural alignment. A compiler for a 16-bit micro
will probably align 32-bit data on 16-bit boundaries, and an 8-bit micro will align
everything on 8-bit. So you have to be careful there.
True, alignment is often different between 8, 16 and 32-bit systems,
but so are integer sizes. Within architectures of the same bitsize there is
far less variation. However it is possible to overalign fields if necessary
or avoid relying on alignment completely (by allocating fields from largest
to smallest).
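That largest-to-smallest rule can be seen in struct sizes; a small illustration (the byte counts in the comments assume 4-byte ints with natural alignment, as on most 32- and 64-bit targets):

```c
/* Same three members, two declaration orders.  With natural alignment
   the mixed order pads after both chars; largest-first packs tightly. */
struct mixed         { char a; int b; char c; };  /* typically 12 bytes */
struct largest_first { int b; char a; char c; };  /* typically 8 bytes  */
```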
Bitfields are even worse. Some compilers will allocate bits from the most significant
bit, although most start at the least significant bits.
Allocating bitfields the wrong way around is a bad idea. It becomes
particularly challenging if you have multiple container sizes and also
mix with non-bitfields... I worked on a compiler that claimed to support
them, but it only worked in the most basic cases, so I removed it in
order to stop the significant support burden it produced.
Standard C will, I believe, allocate a whole int for the bitfield,
Only old C standards do (C89 and earlier), and even then it was
implementation defined.
while some compilers (like gcc) allow bitfield types to be smaller than an int, and
allocate only as much space as is needed for that type (i.e., a filed declared "short
int x : 2" will be placed in a "short" field, not an "int" field).
Correct. This has been the defacto standard on most embedded
compilers for a long time, and it finally made it in C99 and C++.
VC++ is a particularly good example of the non-portability of bitfields - the ordering of
bitfields was changed from MSB to LSB between versions of VC++. That's not to say
bitfields are bad - you just have to be aware of the issues.
Yes, even the latest C standard doesn't specify bitfields properly, so it
is a small wonder bitfields can be portable at all... The ARM EABI gives
an exact specification so that compilers behave identically (this includes
a mathematical definition of bitfield allocation I invented, and solves
issues like zero-sized and oversized bitfields, signedness and enums).
One can hope the C/C++ standards will catch up one good day...

Wilco
Hans-Bernhard Bröker
2007-04-11 23:17:46 UTC
Permalink
Post by Wilco Dijkstra
Bitfields are even worse. Some compilers will allocate bits from the most significant
bit, although most start at the least significant bits.
Allocating bitfields the wrong way around is a bad idea.
Actually no. It cannot be a bad idea for the brutally simple reason
that there is no "wrong way round". There's no "right way round"
either. What there is are two major possibilities, and a whole realm of
more-or-less perverse mixes between them. They're all different, but
none of them is "wrong".
Wilco Dijkstra
2007-04-12 18:02:00 UTC
Permalink
Post by Wilco Dijkstra
Bitfields are even worse. Some compilers will allocate bits from the most significant
bit, although most start at the least significant bits.
Allocating bitfields the wrong way around is a bad idea.
Actually no. It cannot be a bad idea for the brutally simple reason that there is no
"wrong way round". There's no "right way round" either. What there is are two major
possibilities, and a whole realm of more-or-less perverse mixes between them. They're
all different, but none of them is "wrong".
If you just think about it for more than a second you would realise that
if you allocate bitfields the wrong way around you end up with bitfield
containers being filled not only from the least and most significant
bits but also from the middle. For example consider the allocation of:

int   x : 1;
char  y;
short z : 1;
int   a : 1;
short b : 1;
char  c : 7;

Assuming 32-bit ints and little endian, x will be in bit 31 of the int
container, y in bits 0-7, z in bit 15, a in bit 30, b in bit 14 and c
in bits 17-23. Now please try to explain this to someone who
thinks that assigning bitfields the wrong way around just means
the order of bitfields is reversed. It doesn't conform to what most
people expect, and thus is wrong.

Wilco
Hans-Bernhard Bröker
2007-04-12 22:14:51 UTC
Permalink
Post by Wilco Dijkstra
int x :1
char y
short z :1
int a : 1
short b : 1
char c : 7
Assuming 32-bit ints and little endian, x will be in bit 31 of the
int container, y in bits 0-7, z in bit 15, a in bit 30, b in bit 14
and c > in bits 17-23.
What's an "int container" supposed to be? The word "container" doesn't
appear anywhere in C99.

No matter what the "addressable storage unit" mentioned in the standard
is, there's no way such an allocation of bit positions can be
justifiable within the rules for element allocation of a struct. For
starters, a cannot possibly be in the position you name "bit 30" if z
and b are in positions 15 and 14. No matter which way you cut it,
that's a direct violation of the "shall be packed into adjacent bits of
the same unit" part of paragraph 10 of C99 6.7.2.1.
Post by Wilco Dijkstra
Now please try to explain this to someone who
thinks that assigning bitfields the wrong way around just means
the order of bitfields is reversed.
Why on earth would I want to do that? As opposed to simply telling him
to get an actual C compiler rather than try to understand the broken
behaviour of the piece of junk he's mistaking for one?
Post by Wilco Dijkstra
It doesn't conform to what most people expect, and thus is wrong.
People expect the weirdest shit. It's a mistake to try and follow such
expectations.
Wilco Dijkstra
2007-04-13 10:06:42 UTC
Permalink
Post by Wilco Dijkstra
int x :1
char y
short z :1
int a : 1
short b : 1
char c : 7
Assuming 32-bit ints and little endian, x will be in bit 31 of the
int container, y in bits 0-7, z in bit 15, a in bit 30, b in bit 14
and c > in bits 17-23.
What's an "int container" supposed to be? The word "container" doesn't appear anywhere
in C99.
Bitfield container is the compiler term for "addressable storage unit" of a
bitfield. You've never worked on a compiler?
No matter what the "addressable storage unit" mentioned in the standard is, there's no
way such an allocation of bit positions can be justifiable within the rules for element
allocation of a struct. For starters, a cannot possibly be in the position you name
"bit 30" if z and b are in positions 15 and 14. No matter which way you cut it, that's
a direct violation of the "shall be packed into adjacent bits of the same unit" part of
paragraph 10 of C99 6.7.2.1.
int uses a 32-bit container, while short uses a 16-bit one, so these
are not the same storage unit and thus your rule doesn't apply.

Anyway how would you allocate these bitfields reversed?
Post by Wilco Dijkstra
Now please try to explain this to someone who
thinks that assigning bitfields the wrong way around just means
the order of bitfields is reversed.
Why on earth would I want to do that? As opposed to simply telling him to get an actual
C compiler rather than try to understand the broken behaviour of the piece of junk he's
mistaking for one?
Can you name a compiler that implements reversed bitfields properly
according to you? It would be interesting to see how they explain it.

Wilco
Hans-Bernhard Bröker
2007-04-13 22:53:18 UTC
Permalink
Post by Wilco Dijkstra
Post by Wilco Dijkstra
int x :1
char y
short z :1
int a : 1
short b : 1
char c : 7
Assuming 32-bit ints and little endian, x will be in bit 31 of the
int container, y in bits 0-7, z in bit 15, a in bit 30, b in bit 14
and c > in bits 17-23.
What's an "int container" supposed to be? The word "container" doesn't appear anywhere
in C99.
Bitfield container the compiler term for "addressable storage unit" of a
bitfield. You've never worked on a compiler?
We're talking about the language to be compiled, here, not the compilers.
Post by Wilco Dijkstra
No matter what the "addressable storage unit" mentioned in the standard is, there's no
way such an allocation of bit positions can be justifiable within the rules for element
allocation of a struct. For starters, a cannot possibly be in the position you name
"bit 30" if z and b are in positions 15 and 14. No matter which way you cut it, that's
a direct violation of the "shall be packed into adjacent bits of the same unit" part of
paragraph 10 of C99 6.7.2.1.
int uses a 32-bit container, while short uses a 16-bit one, so these
are not the same storage unit and thus your rule doesn't apply.
Except that there's no int and no short anywhere in that example. There
are lots of 1-bit bitfields, specified with different base types. You
apparently insist on those base types having more relevance to the
allowable allocation sequences than the language definition. That's a
rather strange point-of-view, but not one I'm going to share.

And by the way, that's not "my" rule, that's what the language
definition says.
Post by Wilco Dijkstra
Anyway how would you allocate these bitfields reversed?
Assuming 32-bit "units": x in a unit of its own, as its MSBit. y goes
into another unit, because it's not a bit-field. z is the MSBit (#31)
of another unit, followed by a (bit 30), b (bit 29) and c (bits 28 to
22) --- the unit is large enough, so they have to be adjacent, which
leaves no other option once 'y' has been put in the MSBit.
Wilco Dijkstra
2007-04-14 11:35:36 UTC
Permalink
Post by Hans-Bernhard Bröker
Post by Wilco Dijkstra
Post by Wilco Dijkstra
int x :1
char y
short z :1
int a : 1
short b : 1
char c : 7
Assuming 32-bit ints and little endian, x will be in bit 31 of the
int container, y in bits 0-7, z in bit 15, a in bit 30, b in bit 14
and c > in bits 17-23.
What's an "int container" supposed to be? The word "container" doesn't appear
anywhere in C99.
Bitfield container the compiler term for "addressable storage unit" of a
bitfield. You've never worked on a compiler?
We're talking about the language to be compiled, here, not the compilers.
Bitfield container is standard terminology used to describe how to allocate
bitfields, for example in compiler documentation. This is relevant because
we're talking about implementing the language, not about the standard in a
theoretical way.
Post by Hans-Bernhard Bröker
Post by Wilco Dijkstra
No matter what the "addressable storage unit" mentioned in the standard is, there's no
way such an allocation of bit positions can be justifiable within the rules for
element allocation of a struct. For starters, a cannot possibly be in the position
you name "bit 30" if z and b are in positions 15 and 14. No matter which way you cut
it, that's a direct violation of the "shall be packed into adjacent bits of the same
unit" part of paragraph 10 of C99 6.7.2.1.
int uses a 32-bit container, while short uses a 16-bit one, so these
are not the same storage unit and thus your rule doesn't apply.
Except that there's no in and no short anywhere in that examples. There are lots of
1-bit bitfields, specified with different base types. You apparently insist on those
base types having more relevance to the allowable allocation sequences than the language
definition. That's a rather strange point-of-view, but not one I'm going to share.
Well this is how bitfield containers work. The base type of a bitfield is used
as the container type, ie. it determines the placement and alignment of
the bitfield as well as the access type. Containers may overlap, so that
bitfields are packed as tightly as possible (which is a key requirement).
Whether you agree or not, this is how compilers implement bitfields.
Post by Hans-Bernhard Bröker
And by the way, that's not "my" rule, that's what the language definition says.
Actually the language is unclear, which is why I developed a concise
mathematical definition of bitfields that both adheres to the standard
and meets all the expectations people have of bitfields. Effectively
all fields, whether bitfield or not, are allocated like bitfields. So there is
no difference between char x and char y:8 in terms of layout.
Post by Hans-Bernhard Bröker
Post by Wilco Dijkstra
Anyway how would you allocate these bitfields reversed?
Assuming 32-bit "units": x in a unit of its own, as its MSBit. y goes into another
unit, because it's not a bit-field. z is the MSBit (#31) of another unit, followed by a
(bit 30), b (bit 29) and c (bits 28 to 22) --- the unit is large enough, so they have to
be adjacent, which leaves no other option once 'y' has been put in the MSBit.
That's 12 bytes for the structure rather than 4, which is what people
expect (and you violate the requirement that bitfields are allocated to
containers of the appropriate type - c straddles 2 8-bit containers -
and the requirement that containers are allocated on ascending
addresses). The non-reversed version takes 4 bytes on most compilers,
try it out on GCC for example.
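For reference, the suggested GCC experiment looks like this (4 bytes is the expected result on a typical 32-bit-int target; the exact layout remains implementation-defined):

```c
/* The mixed-type bit-field struct discussed above; with LSB-first
   (non-reversed) allocation it packs into a single 32-bit word on
   typical targets. */
struct S {
    int   x : 1;
    char  y;
    short z : 1;
    int   a : 1;
    short b : 1;
    char  c : 7;
};
```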

The problem is that it is impossible to come up with a definition that
allows the reversed variant to be the same size as the normal structure
and not violate the standard. One way would be to allocate all fields
big-endian style (so fields are exactly mirrored), but this means not all
fields have increasing addresses.

Wilco
Grant Edwards
2007-04-11 15:35:03 UTC
Permalink
Post by David Brown
I too believe that this is correct standard C (perhaps there are issues
if pointers and ints are not the same size?). However, the ordering of
the fields within the struct, along with their sizes and alignments, is
implementation dependant.
According to K&R, the compiler may not re-order struct fields.
Quoting from §8.5:

Within a structure, the objects declared have addresses which
increase as their declarations are read left to right.

The current working-version of ISO 9899 is available at

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf

Quoting from §6.7.2.1 Structure and union specifiers

¶13 Within a structure object, the non-bit-field members and
the units in which bit-fields reside have addresses that
increase in the order in which they are declared.
Post by David Brown
A compiler may or may not add padding between fields, and can
in fact re-order fields (for example, to make a more compact
struct).
Which C standard allows structure fields to be re-ordered by
the compiler?
Post by David Brown
Whether that is a problem or not depends on the requirements.
All compilers I've ever used provided a way to control padding
within structures. Since this is obviously platform-specific
code, I don't consider that too evil.
--
Grant Edwards grante Yow! I'm meditating on
at the FORMALDEHYDE and the
visi.com ASBESTOS leaking into my
PERSONAL SPACE!!
David Brown
2007-04-13 08:45:49 UTC
Permalink
Post by Grant Edwards
Post by David Brown
I too believe that this is correct standard C (perhaps there are issues
if pointers and ints are not the same size?). However, the ordering of
the fields within the struct, along with their sizes and alignments, is
implementation dependent.
According to K&R, the compiler may not re-order struct fields.
Within a structure, the objects declared have addresses which
increase as their declarations are read left to right.
The current working-version of ISO 9899 is available at
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf
Quoting from §6.7.2.1 Structure and union specifiers
¶13 Within a structure object, the non-bit-field members and
the units in which bit-fields reside have addresses that
increase in the order in which they are declared.
Post by David Brown
A compiler may or may not add padding between fields, and can
in fact re-order fields (for example, to make a more compact
struct).
Which C standard allows structure fields to be re-ordered by
the compiler?
I have never seen a C compiler which *did* reorder struct fields, but I
thought it was allowed to do so. I guess I was wrong there - thanks for
the correction.
Post by Grant Edwards
Post by David Brown
Whether that is a problem or not depends on the requirements.
All compilers I've ever used provided a way to control padding
within structures. Since this is obviously platform-specific
code, I don't consider that too evil.
Yes, most have some sort of "packed" pragma or attribute. I generally
prefer to do padding explicitly (with dummy fields), with a compiler
warning if it adds padding itself. As you say, padding, alignment and
bit ordering are not a problem when internal to a program - you just
have to be careful when the data is being swapped with other programs or
targets, or when it is mapped to hardware.
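The explicit-padding style described above can be sketched as follows. The struct and its 8-byte total are assumptions for illustration (they hold under the usual alignments for these fixed-width types); C11's `_Static_assert` plays the role of the compiler warning that catches hidden padding:

```c
#include <stdint.h>

/* Dummy fields make every byte of the layout visible in the source. */
struct record {
    uint8_t  tag;
    uint8_t  pad0;     /* explicit padding; the compiler would otherwise
                          insert this byte silently before 'length' */
    uint16_t length;
    uint32_t value;
};

/* Fails to compile if the layout is not what we wrote down. */
_Static_assert(sizeof(struct record) == 8, "hidden padding in struct record");
```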
Wilco Dijkstra
2007-04-13 09:50:19 UTC
Permalink
Post by Grant Edwards
Quoting from §6.7.2.1 Structure and union specifiers
¶13 Within a structure object, the non-bit-field members and
the units in which bit-fields reside have addresses that
increase in the order in which they are declared.
Post by David Brown
A compiler may or may not add padding between fields, and can
in fact re-order fields (for example, to make a more compact
struct).
Which C standard allows structure fields to be re-ordered by
the compiler?
I have never seen a C compiler which *did* reorder struct fields, but I thought it was
allowed to do so. I guess I was wrong there - thanks for the correction.
Compilers can reorder fields - if you cannot notice. Since structures are
used across multiple sources, it is difficult to prove that you didn't take
the address of the fields. However several compilers can change an array
of structs into a struct of arrays to improve cache locality, Sun did this on
some SPEC code.
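The array-of-structs to struct-of-arrays transformation mentioned here can be sketched by hand (names are illustrative; a compiler doing this automatically must first prove no address of a field escapes):

```c
#include <stddef.h>

/* Array-of-structs: the layout as declared in the source. */
struct point { float x, y, z; };

float sum_x_aos(const struct point *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p[i].x;              /* strides over y and z each iteration */
    return s;
}

/* Struct-of-arrays: each field contiguous, so a loop reading only x
 * streams through memory - the cache-locality win described above. */
struct points_soa { const float *x, *y, *z; };

float sum_x_soa(const struct points_soa *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p->x[i];
    return s;
}

/* Both layouts give the same answer; only the memory traffic differs. */
float layouts_agree(void) {
    struct point aos[3] = {{1,2,3},{4,5,6},{7,8,9}};
    const float xs[3] = {1,4,7}, ys[3] = {2,5,8}, zs[3] = {3,6,9};
    struct points_soa soa = { xs, ys, zs };
    return sum_x_aos(aos, 3) - sum_x_soa(&soa, 3);
}
```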

Wilco
Grant Edwards
2007-04-13 14:22:48 UTC
Permalink
Post by Wilco Dijkstra
Post by Grant Edwards
Quoting from §6.7.2.1 Structure and union specifiers
¶13 Within a structure object, the non-bit-field members and
the units in which bit-fields reside have addresses that
increase in the order in which they are declared.
Post by David Brown
A compiler may or may not add padding between fields, and can
in fact re-order fields (for example, to make a more compact
struct).
Which C standard allows structure fields to be re-ordered by
the compiler?
I have never seen a C compiler which *did* reorder struct fields, but I thought it was
allowed to do so. I guess I was wrong there - thanks for the correction.
Compilers can reorder fields - if you cannot notice.
Not if they want to be standards compliant.
Post by Wilco Dijkstra
Since structures are used across multiple sources, it is
difficult to prove that you didn't take the address of the
fields.
Which is irrelevant to the standard.
Post by Wilco Dijkstra
However several compilers can change an array of structs into
a struct of arrays to improve cache locality, Sun did this on
some SPEC code.
That violates the C standard. The standard says the addresses
must be monotonically increasing in the order that they were
declared. Period.

It doesn't say you can violate that requirement if the program
never uses the address.
--
Grant Edwards grante Yow! .. Now KEN and BARBIE
at are PERMANENTLY ADDICTED to
visi.com MIND-ALTERING DRUGS...
Wilco Dijkstra
2007-04-13 16:05:08 UTC
Permalink
Post by Grant Edwards
Post by Wilco Dijkstra
Post by Grant Edwards
Quoting from §6.7.2.1 Structure and union specifiers
¶13 Within a structure object, the non-bit-field members and
the units in which bit-fields reside have addresses that
increase in the order in which they are declared.
Post by David Brown
A compiler may or may not add padding between fields, and can
in fact re-order fields (for example, to make a more compact
struct).
Which C standard allows structure fields to be re-ordered by
the compiler?
I have never seen a C compiler which *did* reorder struct fields, but I thought it was
allowed to do so. I guess I was wrong there - thanks for the correction.
Compilers can reorder fields - if you cannot notice.
Not if they want to be standards compliant.
Standards compliance only matters for things you can notice.
Have you ever used a modern compiler? Many modern compilers
split structures into individual fields, allocate some of the fields to
registers and remove the unused fields. A field in a register doesn't
have an address, let alone a monotonically increasing one...

Are you seriously claiming that these compilers are not compliant?
Post by Grant Edwards
Post by Wilco Dijkstra
However several compilers can change an array of structs into
a struct of arrays to improve cache locality, Sun did this on
some SPEC code.
That violates the C standard. The standard says the addresses
must be monotonically increasing in the order that they were
declared. Period.
It's good that compiler writers are pragmatic, not pedantic...
Post by Grant Edwards
It doesn't say you can violate that requirement if the program
never uses the address.
If a program that violates the requirement behaves identically to a
program that doesn't violate it then it is considered conforming by
the C standard (the standard has explicit wording to this effect).

In any case, if you can't tell which one is not conforming then how
can it possibly matter? Would you be happy to randomly select
one and use it, even though it might not be conformant? Or would
it make you feel bad somehow? It's a philosophical issue at best.

Wilco
Rob Windgassen
2007-04-13 21:05:05 UTC
Permalink
Post by Grant Edwards
Post by Wilco Dijkstra
Compilers can reorder fields - if you cannot notice.
Not if they want to be standards compliant.
Post by Wilco Dijkstra
Since structures are used across multiple sources, it is
difficult to prove that you didn't take the address of the
fields.
Which is irrelevant to the standard.
It is relevant. The standard describes an abstract machine to define the
semantics (section 5.1.2.3 of C99). An implementation (compiler) is free
to optimize as long as the visible behaviour is not changed:

<q>
5.1.2.3 Program execution
1 The semantic descriptions in this International Standard describe the
behavior of an abstract machine in which issues of optimization are
irrelevant.

....

5 The least requirements on a conforming implementation are:
- At sequence points, volatile objects are stable in the sense that
previous accesses are complete and subsequent accesses have not yet
occurred.
- At program termination, all data written into files shall be identical
to the result that execution of the program according to the abstract
semantics would have produced.
- The input and output dynamics of interactive devices shall take place
as specified in 7.19.3. The intent of these requirements is that
unbuffered or line-buffered output appear as soon as possible, to ensure
that prompting messages actually appear prior to a program waiting for
input.
</q>

Rob
Grant Edwards
2007-04-13 21:13:13 UTC
Permalink
Post by Rob Windgassen
Post by Grant Edwards
Post by Wilco Dijkstra
Since structures are used across multiple sources, it is
difficult to prove that you didn't take the address of the
fields.
Which is irrelevant to the standard.
It is relevant. The standard describes an abstract machine to define the
semantics (section 5.1.2.3 of C99). An implementation (compiler) is free
I must admit I was wrong. As long as it can be shown that the
reordering of fields is not visible, the compiler is allowed to
do so. I would maintain that's virtually impossible to do in
an embedded system, where things like the location of data in
non-volatile memory is "visible" even though it was never
"written to a file".
--
Grant Edwards grante Yow! If elected, Zippy
at pledges to each and every
visi.com American a 55-year-old
houseboy ...
John Devereux
2007-04-13 22:36:33 UTC
Permalink
Post by Grant Edwards
Post by Rob Windgassen
Post by Grant Edwards
Post by Wilco Dijkstra
Since structures are used across multiple sources, it is
difficult to prove that you didn't take the address of the
fields.
Which is irrelevant to the standard.
It is relevant. The standard describes an abstract machine to define the
semantics (section 5.1.2.3 of C99). An implementation (compiler) is free
I must admit I was wrong. As long as it can be shown that the
reordering of fields is not visible, the compiler is allowed to
do so. I would maintain that's virtually impossible to do in
an embedded system, where things like the location of data in
non-volatile memory is "visible" even though it was never
"written to a file".
But this sort of thing, which is outside of the definition of the C
language, might not "count" as visible. That's why a compiler can
optimize out a variable used as a semaphore, unless declared volatile,
since C knows nothing of threads. So I don't see why a compiler can't
re-arrange structure fields, since this could be made invisible within
the C language. (But very bad if the structure maps onto a hardware
device).
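The semaphore case above can be sketched like this. It is a host-testable stand-in: `raise` plays the role of the other thread or interrupt, and `volatile sig_atomic_t` is the one flag type ISO C guarantees may be written from a signal handler:

```c
#include <signal.h>

/* Without 'volatile' the compiler may read 'done' once, keep it in a
 * register, and spin forever - C knows nothing of handlers or threads. */
static volatile sig_atomic_t done = 0;

static void on_signal(int sig) { (void)sig; done = 1; }

int wait_for_flag(void) {
    signal(SIGINT, on_signal);
    raise(SIGINT);               /* stands in for an ISR or other thread */
    while (!done)
        ;                        /* legal only because 'done' is volatile */
    return (int)done;
}
```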
--
John Devereux
Paul Gotch
2007-04-14 00:42:43 UTC
Permalink
Post by John Devereux
re-arrange structure fields, since this could be made invisible within
the C language. (But very bad if the structure maps onto a hardware
device).
...which is why lots of hideousness exists with #pragma directives and
packed attributes.

-p
--
"Unix is user friendly, it's just picky about who its friends are."
- Anonymous
--------------------------------------------------------------------
Wilco Dijkstra
2007-04-15 12:25:43 UTC
Permalink
Post by Paul Gotch
Post by John Devereux
re-arrange structure fields, since this could be made invisible within
the C language. (But very bad if the structure maps onto a hardware
device).
...which is why lots of hideousness exists with #pragma directives and
packed attributes.
Volatile is the standard way to do this sort of thing (of course, like most other
language features, volatile is not very well defined in C, but compiler writers
and their users agree on what it means). It forces the compiler to perform
all reads and writes as written in the source code, including the exact
width the user specified and their order.
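The idiom being described looks like the sketch below. The 0x8000 address comes from the thread; for a host-runnable version the "register" is redirected at an ordinary variable, which is an assumption made purely so the code can run anywhere:

```c
#include <stdint.h>

/* Target form: every access goes to the bus, at the written width, in
 * the written order, because the pointed-to object is volatile. */
#define UART_DATA (*(volatile uint32_t *)0x8000u)

/* Host-testable stand-in: the same idiom, pointed at a real object. */
static uint32_t fake_reg;
static volatile uint32_t *const reg = &fake_reg;

void     reg_write(uint32_t v) { *reg = v; }  /* may not be elided/reordered */
uint32_t reg_read(void)        { return *reg; }
```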

Wilco
Hans-Bernhard Bröker
2007-04-11 23:09:43 UTC
Permalink
Post by David Brown
Post by Wilco Dijkstra
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
I too believe that this is correct standard C
It's not. The second code line above has an implementation-defined
effect. One possible result is that "ptr" ends up holding a trap
representation. If it does (and there's *no* standard C way of being
sure it doesn't), then the third line, dereferencing a pointer holding a
trap representation, causes undefined behaviour. Code that causes
undefined behaviour can not possibly be called "correct" by any useful
interpretation of that term.

For those who insist on examples instead of standard-ese: there's
nothing to stop a platform from simply not having any addressable memory
address (void *)0x8000. And there sure as manure is no requirement for
this memory, if it exists, to be _writable_.

The above code is about as far from being "correct standard C" as any
piece of code can be, while still passing through a typical C compiler.
Post by David Brown
A compiler may or may not add padding between fields,
Correct
Post by David Brown
and can in fact re-order fields (for example, to make a more compact
struct).
A lot of people think that, but they're wrong. The Standard doesn't say
so out loud but no, a C compiler is not allowed to change the order of
fields inside a structure from that found in the source.
Grant Edwards
2007-04-11 23:30:57 UTC
Permalink
Post by Hans-Bernhard Bröker
and can in fact re-order fields (for example, to make a more compact
struct).
A lot of people think that, but they're wrong. The Standard
doesn't say so out loud but no, a C compiler is not allowed to
change the order of fields inside a structure from that found
in the source.
Actually it does say "out loud" that compilers must not change
the order of fields inside a structure. See my previous
posting for the specific paragraphs (from both K&R and
ISO-9899).

Unless that's not considered "out loud" -- though I don't see
how that language could mean anything else...
--
Grant Edwards grante Yow! I just remembered
at something about a TOAD!
visi.com
Hans-Bernhard Bröker
2007-04-12 00:22:49 UTC
Permalink
Post by Grant Edwards
Post by Hans-Bernhard Bröker
A lot of people think that, but they're wrong. The Standard
doesn't say so out loud but no, a C compiler is not allowed to
change the order of fields inside a structure from that found
in the source.
Actually it does say "out loud" that compilers must not change
the order of fields inside a structure. See my previous
posting for the specific paragraphs (from both K&R and
ISO-9899).
One problem being that neither of those truly is "the" Standard the
majority of embedded work has to rely on, which still is C-90. K&R
stopped being the standard when the first actual standard arrived, and
C99 is often still a castle in the sky.

The wording about increasing order was only added in C99 unless I
misremember worse than usual. There used to be only the statement that
pointer casts between a struct and an extended version of itself (with
additional elements added only at the end) are allowed. The
sequential-ordering requirement follows from that, but this fact was
hidden a bit too well.
Grant Edwards
2007-04-12 03:33:26 UTC
Permalink
Post by Hans-Bernhard Bröker
Post by Grant Edwards
Post by Hans-Bernhard Bröker
A lot of people think that, but they're wrong. The Standard
doesn't say so out loud but no, a C compiler is not allowed to
change the order of fields inside a structure from that found
in the source.
Actually it does say "out loud" that compilers must not change
the order of fields inside a structure. See my previous
posting for the specific paragraphs (from both K&R and
ISO-9899).
One problem being that neither of those truly is "the"
Standard the majority of embedded work has to rely on, which
still is C-90. K&R stopped being the standard when the first
actual standard arrived, and C99 is often still a castle in
the sky.
The wording about increasing order was only added in C99 unless I
misremember worse than usual.
Ah, I was wondering about that. I don't currently have access
to a copy of the C90 spec. I assumed that something which was
stated virtually identically in K&R and C99 would also be in
C90.
Post by Hans-Bernhard Bröker
There used to be only the statement about pointer casts
between a struct and an extended version of itself with
additional elements added only at the end, being allowed. The
sequential-ordering requirement follows from that, but this
fact was hidden a bit too well.
I find it very odd that they would take what was a clearly
worded requirement in K&R, remove it, then put it back. On
second thought, when I recall my experiences with standards
processes, I guess it's not all that odd...
--
Grant Edwards grante Yow! A dwarf is passing
at out somewhere in Detroit!
visi.com
John Temples
2007-04-12 16:36:05 UTC
Permalink
Post by Hans-Bernhard Bröker
Post by Grant Edwards
Post by Hans-Bernhard Bröker
A lot of people think that, but they're wrong. The Standard
doesn't say so out loud but no, a C compiler is not allowed to
change the order of fields inside a structure from that found
in the source.
Actually it does say "out loud" that compilers must not change
the order of fields inside a structure. See my previous
posting for the specific paragraphs (from both K&R and
ISO-9899).
The wording about increasing order was only added in C99 unless I
misremember worse than usual. There used to be only the statement that
pointer casts between a struct and an extended version of itself (with
additional elements added only at the end) are allowed. The
sequential-ordering requirement follows from that, but this fact was
hidden a bit too well.
In 6.5.2.1, C90 says, "Within a structure object, the non-bit-field
members and the units in which bit-fields reside have addresses that
increase in the order in which they are declared." This is the
identical wording of C99, 6.7.2.1.
--
John W. Temples, III
Grant Edwards
2007-04-12 17:02:12 UTC
Permalink
Post by John Temples
Post by Hans-Bernhard Bröker
Post by Grant Edwards
Post by Hans-Bernhard Bröker
A lot of people think that, but they're wrong. The Standard
doesn't say so out loud but no, a C compiler is not allowed to
change the order of fields inside a structure from that found
in the source.
Actually it does say "out loud" that compilers must not change
the order of fields inside a structure. See my previous
posting for the specific paragraphs (from both K&R and
ISO-9899).
The wording about increasing order was only added in C99 unless I
misremember worse than usual. There used to be only the statement that
pointer casts between a struct and an extended version of itself (with
additional elements added only at the end) are allowed. The
sequential-ordering requirement follows from that, but this fact was
hidden a bit too well.
In 6.5.2.1, C90 says, "Within a structure object, the non-bit-field
members and the units in which bit-fields reside have addresses that
increase in the order in which they are declared." This is the
identical wording of C99, 6.7.2.1.
I had found a reference somewhere to 6.5.2.1 in regard to this
topic, but the poster didn't say what spec he was citing. Looks
like it was C90.
--
Grant Edwards grante Yow! SHHHH!! I hear SIX
at TATTOOED TRUCK-DRIVERS
visi.com tossing ENGINE BLOCKS into
empty OIL DRUMS...
David Brown
2007-04-13 08:56:00 UTC
Permalink
Post by Hans-Bernhard Bröker
Post by David Brown
Post by Wilco Dijkstra
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
I too believe that this is correct standard C
It's not. The second code line above has an implementation-defined
effect. One possible result is that "ptr" ends up holding a trap
representation. If it does (and there's *no* standard C way of being
sure it doesn't), then the third line, dereferencing a pointer holding a
trap representation, causes undefined behaviour. Code that causes
undefined behaviour can not possibly be called "correct" by any useful
interpretation of that term.
For those who insist on examples instead of standard-ese: there's
nothing to stop a platform from simply not having any addressable memory
address (void *)0x8000. And there sure as manure is no requirement for
this memory, if it exists, to be _writable_.
The above code is about as far from being "correct standard C" as any
piece of code can be, while still passing through a typical C compiler.
I think the code is "correct", under certain reasonable assumptions -
that the memory exists, is addressable, is writeable in this way, and
that the compiler will make the same assumptions.

By the same logic, I would say the following is "correct" C:

extern void bar(char* p);
void foo(void) {
char buffer[1000];
bar(&buffer[999]);
}

There is absolutely nothing in the C standards that guarantees that the
platform can allocate 1000 bytes on the stack (or simulated stack) - yet
it is still considered correct C because we assume the resources are
available.
Post by Hans-Bernhard Bröker
Post by David Brown
A compiler may or may not add padding between fields,
Correct
Post by David Brown
and can in fact re-order fields (for example, to make a more compact
struct).
A lot of people think that, but they're wrong. The Standard doesn't say
so out loud but no, a C compiler is not allowed to change the order of
fields inside a structure from that found in the source.
Thanks for that correction.
Wilco Dijkstra
2007-04-13 09:45:52 UTC
Permalink
Post by David Brown
Post by Wilco Dijkstra
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
I too believe that this is correct standard C
It's not. The second code line above has an implementation-defined effect. One
possible result is that "ptr" ends up holding a trap representation. If it does (and
there's *no* standard C way of being sure it doesn't), then the third line,
dereferencing a pointer holding a trap representation, causes undefined behaviour.
Code that causes undefined behaviour can not possibly be called "correct" by any useful
interpretation of that term.
For those who insist on examples instead of standard-ese: there's nothing to stop a
platform from simply not having any addressable memory address (void *)0x8000. And
there sure as manure is no requirement for this memory, if it exists, to be _writable_.
The above code is about as far from being "correct standard C" as any piece of code can
be, while still passing through a typical C compiler.
If you can name good technical reasons why it can never work then you
would have a point. Just name a few compilers or CPUs where it never
works. Arguing that it is not correct when it obviously works for everybody
and is used in billions of devices is never going to win any arguments.
The C standard is usually 10+ years behind common practice.
I think the code is "correct", under certain reasonable assumptions - that the memory
exists, is addressable, is writeable in this way, and that the compiler will make the
same assumptions.
You're absolutely right. It does not even matter whether it is unspecified,
implementation defined or even undefined behaviour. What matters is
that it works. It's a normal expectation to cast between integers and
pointers (both are just sets of bits, after all), and it works perfectly fine
on all compilers. In the embedded space peripherals are usually
accessed by casting integers to pointers.
extern void bar(char* p);
void foo(void) {
char buffer[1000];
bar(&buffer[999]);
}
There is absolutely nothing in the C standards that guarantees that the platform can
allocate 1000 bytes on the stack (or simulated stack) - yet it is still considered
correct C because we assume the resources are available.
Indeed. There are lots of other examples like this.

Wilco
Hans-Bernhard Bröker
2007-04-13 22:59:41 UTC
Permalink
[...]
Post by David Brown
Post by Hans-Bernhard Bröker
For those who insist on examples instead of standard-ese: there's
nothing to stop a platform from simply not having any addressable
memory address (void *)0x8000. And there sure as manure is no
requirement for this memory, if it exists, to be _writable_.
The above code is about as far from being "correct standard C" as any
piece of code can be, while still passing through a typical C compiler.
I think the code is "correct", under certain reasonable assumptions -
That's a delusion. The code is incorrect, precisely *because* you have
to make external assumptions to avoid it causing undefined behaviour.
Post by David Brown
that the memory exists, is addressable, is writeable in this way, and
that the compiler will make the same assumptions.
None of those assumptions are part of the code sample in question.
Post by David Brown
extern void bar(char* p);
void foo(void) {
char buffer[1000];
bar(&buffer[999]);
}
There is absolutely nothing in the C standards that guarantees that the
platform can allocate 1000 bytes on the stack (or simulated stack)
Actually, to some extent there is: C99 5.2.4.1: the 65535-byte object
requirement.
John Devereux
2007-04-14 11:58:30 UTC
Permalink
Post by Hans-Bernhard Bröker
[...]
Post by David Brown
Post by Hans-Bernhard Bröker
For those who insist on examples instead of standard-ese: there's
nothing to stop a platform from simply not having any addressable
memory address (void *)0x8000. And there sure as manure is no
requirement for this memory, if it exists, to be _writable_.
The above code is about as far from being "correct standard C" as
any piece of code can be, while still passing through a typical C
compiler.
I think the code is "correct", under certain reasonable assumptions -
That's a delusion. The code is incorrect, precisely *because* you
have to make external assumptions to avoid it causing undefined
behaviour.
But this is in the context of an embedded system which does have
hardware or memory which can be accessed at that address - that is the
whole point of what the OP is trying to do. Of course this code won't
work on a different hardware platform, but I don't think this makes
the code "incorrect" in the context of embedded systems.

Otherwise any kind of hardware driver written in C is "incorrect",
because it inherently has to make assumptions about the hardware
behaviour!
--
John Devereux
David Brown
2007-04-16 07:10:47 UTC
Permalink
Post by Hans-Bernhard Bröker
[...]
Post by David Brown
Post by Hans-Bernhard Bröker
For those who insist on examples instead of standard-ese: there's
nothing to stop a platform from simply not having any addressable
memory address (void *)0x8000. And there sure as manure is no
requirement for this memory, if it exists, to be _writable_.
The above code is about as far from being "correct standard C" as any
piece of code can be, while still passing through a typical C compiler.
I think the code is "correct", under certain reasonable assumptions -
That's a delusion. The code is incorrect, precisely *because* you have
to make external assumptions to avoid it causing undefined behaviour.
*Any* program exists in the context of its run-time environment, and
programs make all sorts of assumptions about that environment. These
assumptions are not in the code, because there is no way to write them
in the code. Even if it were possible, in this case, to tell the compiler
that the memory at 0x8000 works as you want, then there would still be
other assumptions.

At what point do you start to accept these external assumptions? Do you
accept that the memory interfaces on the card work (there may be
hardware errors, or occasional ram bit errors)? Do you accept that the
processor works (they've been known to have bugs)? Do you accept that
integer arithmetic "works" at a mathematical level (you can't prove it)?

Back here in the real world, it's fair to assume that the memory at
0x8000 works as expected. It's possible that the code will contain
comments to that effect, but it might be implied by the platform in use.
It should of course be clear what is going on - that's part of writing
good code.
Post by Hans-Bernhard Bröker
Post by David Brown
that the memory exists, is addressable, is writeable in this way, and
that the compiler will make the same assumptions.
None of those assumptions are part of the code sample in question.
Post by David Brown
extern void bar(char* p);
void foo(void) {
char buffer[1000];
bar(&buffer[999]);
}
There is absolutely nothing in the C standards that guarantees that
the platform can allocate 1000 bytes on the stack (or simulated stack)
Actually, to some extent there is: C99 5.2.4.1: the 65535-byte object
requirement.
Are you happy then that "foo" is correct code, while foo2 below is not?
And incidentally, does a C99 compliant compiler suddenly become
non-compliant if the linker script does not provide 64k of stack space?

void foo2(void) {
char buffer[10000];
bar(&buffer[999]);
}


mvh.,

David

Jonathan Kirwan
2007-04-10 20:14:12 UTC
Permalink
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
Post by R Pradeep Chandran
Post by Jonathan Kirwan
A possibly good way to handle this without changing the c standard (at
least, not changing the behavior of resulting code) would be to follow
how a smart c++ compiler and its linker might interpret Stan's
const int abc= 0x123;
In c, this is not only a declaration but also a definition. It
creates storage for 'abc' and initializes it -- in other words, there
exists an address where the value 0x123 is stored, at run-time. If
you placed that into a common include file, used by a variety of c
source files, you'd be in a peck of trouble as each compilation unit
would provide separate definitions. If placed into a single c file
for compilation, storage would be allocated when perhaps none would be
required.
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
I disagree. But we'd need to dig into the details to be sure and I
suspect that the compiler vendors would be able to nail that question
far more quickly and decisively than either of us. So if they comment
on this, I'll take their word about it either way.
In C you'd have to emit the definition always, in C++ only if the address
was taken. In both cases you can aggressively inline the constant and
the definition will not end up in the executable as the linker will remove it.
To get back to the OP's question, what he wants to do is perfectly and
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
This means there is little reason to add non-standard extensions.
The only gotcha that remains is ensuring the linker won't use this
memory for something else, this can be done with linker scripts.
I can't agree. Your example requires storage for the pointer. In the
case I'm talking about, there is no such storage required.

Jon
Wilco Dijkstra
2007-04-10 21:15:51 UTC
Permalink
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
Post by R Pradeep Chandran
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
I disagree. But we'd need to dig into the details to be sure and I
suspect that the compiler vendors would be able to nail that question
far more quickly and decisively than either of us. So if they comment
on this, I'll take their word about it either way.
In C you'd have to emit the definition always, in C++ only if the address
was taken. In both cases you can aggressively inline the constant and
the definition will not end up in the executable as the linker will remove it.
To get back to the OP's question, what he wants to do is perfectly and
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
This means there is little reason to add non-standard extensions.
The only gotcha that remains is ensuring the linker won't use this
memory for something else, this can be done with linker scripts.
I can't agree. Your example requires storage for the pointer. In the
case I'm talking about, there is no such storage required.
I was trying to explain above why this wasn't required - obviously
I didn't do a good job! The definition can be optimised away as
nothing refers to it. The ARM compiler generates this for the above
statement:

MOV r0,#1
MOV r1,#0x8000
STR r0,[r1,#4]

On CPUs that have absolute addressing this would be even more
efficient. Of course you need to recompile if you change the address.

Wilco
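Wilco's overlay technique can be tried in a hosted build by letting a static object stand in for the fixed address (a sketch; on target the cast would use the literal 0x8000 and a linker script would reserve the region):

```c
#include <stddef.h>

typedef struct { int x, y, z; } T;

/* On target this would be:  #define REGS ((T *)0x8000)
 * A static stand-in lets the sketch run hosted without faulting. */
static T region_standin;
#define REGS (&region_standin)

/* REGS->y = 1 compiles to a store at a known offset; with a literal
 * address the compiler folds the whole access to a constant address,
 * much as in the ARM listing above. */
int write_second_field(void)
{
    REGS->y = 1;
    return REGS->y;
}
```

The name `REGS` and the stand-in object are illustrative only; the principle is that no pointer variable exists for the compiler to load.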
Jonathan Kirwan
2007-04-10 21:26:07 UTC
Permalink
On Tue, 10 Apr 2007 21:15:51 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by R Pradeep Chandran
On Tue, 10 Apr 2007 13:23:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
Post by R Pradeep Chandran
An aggressive compiler can optimize this under many conditions. As far
as I can see, the ISO standard for C does not require that storage be
allocated.
I disagree. But we'd need to dig into the details to be sure and I
suspect that the compiler vendors would be able to nail that question
far more quickly and decisively than either of us. So if they comment
on this, I'll take their word about it either way.
In C you'd have to emit the definition always, in C++ only if the address
was taken. In both cases you can aggressively inline the constant and
the definition will not end up in the executable as the linker will remove it.
To get back to the OP's question, what he wants to do is perfectly possible in standard C:
typedef struct { int x, y, z; } T;
T *const ptr = (T*) 0x8000;
ptr->y = 1; // write to 0x8004
This means there is little reason to add non-standard extensions.
The only gotcha that remains is ensuring the linker won't use this
memory for something else; this can be done with linker scripts.
I can't agree. Your example requires storage for the pointer. In the
case I'm talking about, there is no such storage required.
I was trying to explain above why this wasn't required - obviously
I didn't do a good job! The definition can be optimised away as
nothing refers to it. The ARM compiler generates this for the above
MOV r0,#1
MOV r1,#0x8000
STR r0,[r1,#4]
On CPUs that have absolute addressing this would be even more
efficient. Of course you need to recompile if you change the address.
Thanks for the counterpoint. That may be true on some c++ compiler,
but it would NOT be true for c compilers, which is what this is about
(at least, to me.) And even in the c++ case, there remains a
difference. I don't need to recompile c/c++ sources when I change the
symbolic constant defined by the assembly module I use in some
projects. I just re-assemble the symbolic constant file and relink
the project.

Jon
Wilco Dijkstra
2007-04-10 21:43:19 UTC
Permalink
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:15:51 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
I was trying to explain above why this wasn't required - obviously
I didn't do a good job! The definition can be optimised away as
nothing refers to it. The ARM compiler generates this for the above
MOV r0,#1
MOV r1,#0x8000
STR r0,[r1,#4]
On CPUs that have absolute addressing this would be even more
efficient. Of course you need to recompile if you change the address.
Thanks for the counterpoint. That may be true on some c++ compiler,
but it would NOT be true for c compilers, which is what this is about
(at least, to me.)
I still wasn't clear enough - the above code is generated by a C compiler.
There is no difference between C and C++ in this respect.
Post by Jonathan Kirwan
And even in the c++ case, there remains a
difference. I don't need to recompile c/c++ sources when I change the
symbolic constant defined by the assembly module I use in some
projects. I just re-assemble the symbolic constant file and relink
the project.
Sure, if you don't want to recompile then you need to use symbolic
variables and explicitly place them. This can sometimes be done
in the assembler or otherwise at linktime.

Wilco
Jonathan Kirwan
2007-04-10 21:58:20 UTC
Permalink
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:15:51 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
I was trying to explain above why this wasn't required - obviously
I didn't do a good job! The definition can be optimised away as
nothing refers to it. The ARM compiler generates this for the above
MOV r0,#1
MOV r1,#0x8000
STR r0,[r1,#4]
On CPUs that have absolute addressing this would be even more
efficient. Of course you need to recompile if you change the address.
Thanks for the counterpoint. That may be true on some c++ compiler,
but it would NOT be true for c compilers, which is what this is about
(at least, to me.)
I still wasn't clear enough - the above code is generated by a C compiler.
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
Post by Wilco Dijkstra
Post by Jonathan Kirwan
And even in the c++ case, there remains a
difference. I don't need to recompile c/c++ sources when I change the
symbolic constant defined by the assembly module I use in some
projects. I just re-assemble the symbolic constant file and relink
the project.
Sure, if you don't want to recompile then you need to use symbolic
variables and explicitly place them. This can sometimes be done
in the assembler or otherwise at linktime.
Jon
Wilco Dijkstra
2007-04-11 00:17:56 UTC
Permalink
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
The C standard defines a virtual machine which allows any optimizations
as long as the observable side effects (e.g. output) remain the same.

How could you possibly observe the effect of constant folding or
removal of unused variables (other than by looking at the binary)?

Wilco
Jonathan Kirwan
2007-04-11 00:25:58 UTC
Permalink
On Wed, 11 Apr 2007 00:17:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
The C standard defines a virtual machine which allows any optimizations
as long as the observable side effects (e.g. output) remain the same.
How could you possibly observe the effect of constant folding or
removal of unused variables (other than by looking at the binary)?
My mind already went the way you just took. But another part of me
says that the standard requires the storage be allocated. I just
can't say, not being quite that deep into being a language lawyer. Are
you?

Jon
Wilco Dijkstra
2007-04-11 00:33:49 UTC
Permalink
Post by Jonathan Kirwan
On Wed, 11 Apr 2007 00:17:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
The C standard defines a virtual machine which allows any optimizations
as long as the observable side effects (e.g. output) remain the same.
How could you possibly observe the effect of constant folding or
removal of unused variables (other than by looking at the binary)?
My mind already went the way you just took. But another part of me
says that the standard requires the storage be allocated. I just
can't say, not being quite that deep into being a language lawyer. Are
you?
Definitely not! But I'd always check with the language lawyers before adding
exciting new optimizations. It was a long time ago, but I recall that the only
issue in removing unused data or code could be the changed debug view.
We took the decision that with maximum optimization the debug view was
already so affected (or rather impaired) that this would hardly make a difference.
At lower optimization levels these optimizations would not be done anyway.

Wilco
Jonathan Kirwan
2007-04-11 00:47:56 UTC
Permalink
On Wed, 11 Apr 2007 00:33:49 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Wed, 11 Apr 2007 00:17:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
The C standard defines a virtual machine which allows any optimizations
as long as the observable side effects (e.g. output) remain the same.
How could you possibly observe the effect of constant folding or
removal of unused variables (other than by looking at the binary)?
My mind already went the way you just took. But another part of me
says that the standard requires the storage be allocated. I just
can't say, not being quite that deep into being a language lawyer. Are
you?
Definitely not! But I'd always check with the language lawyers before adding
exciting new optimizations. It was a long time ago, but I recall that the only
issue in removing unused data or code could be the changed debug view.
We took the decision that with maximum optimization the debug view was
already so affected (or rather impaired) that this would hardly make a difference.
At lower optimization levels these optimizations would not be done anyway.
Not being a language lawyer, either, I could conjure up an argument for
keeping any data definition in c. The c source may not be the entire
program and therefore the data definition may be required "elsewhere."
I think c is much more this kind of "what you see is what you get"
than c++ is, big picture. So I could argue that c definitions must
stay, while accepting that c++ definitions can be optimized away.

Of course, I will defer to those who write the compilers and worry
about these things.

Jon
Wilco Dijkstra
2007-04-11 10:58:44 UTC
Permalink
Post by Jonathan Kirwan
On Wed, 11 Apr 2007 00:33:49 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Wed, 11 Apr 2007 00:17:56 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
Post by Jonathan Kirwan
On Tue, 10 Apr 2007 21:43:19 GMT, "Wilco Dijkstra"
Post by Wilco Dijkstra
There is no difference between C and C++ in this respect.
Hmm. That may be, but wouldn't that violate the c standard? I have a
vague memory that such optimizations aren't considered strictly
correct.
The C standard defines a virtual machine which allows any optimizations
as long as the observable side effects (e.g. output) remain the same.
How could you possibly observe the effect of constant folding or
removal of unused variables (other than by looking at the binary)?
My mind already went the way you just took. But another part of me
says that the standard requires the storage be allocated. I just
can't say, not being quite that deep into being a language lawyer. Are
you?
Definitely not! But I'd always check with the language lawyers before adding
exciting new optimizations. It was a long time ago, but I recall that the only
issue in removing unused data or code could be the changed debug view.
We took the decision that with maximum optimization the debug view was
already so affected (or rather impaired) that this would hardly make a difference.
At lower optimization levels these optimizations would not be done anyway.
Not being a language lawyer, either, I could conjure up an argument for
keeping any data definition in c. The c source may not be the entire
program and therefore the data definition may be required "elsewhere."
I think c is much more this kind of "what you see is what you get"
than c++ is, big picture. So I could argue that c definitions must
stay, while accepting that c++ definitions can be optimized away.
Of course, I will defer to those who write the compilers and worry
about these things.
You're right in that in C constants can be used externally, and so could
be referred to from another object file. This means that the constant
definition must always be emitted by the compiler. One way is to place
the unused definition in its own data section, so that it is trivial for the
linker to spot as unused (no relocations to it) and remove.

Wilco
nospam
2007-04-10 22:52:52 UTC
Permalink
Post by Wilco Dijkstra
Post by Jonathan Kirwan
Post by Wilco Dijkstra
MOV r0,#1
MOV r1,#0x8000
STR r0,[r1,#4]
On CPUs that have absolute addressing this would be even more
efficient. Of course you need to recompile if you change the address.
Thanks for the counterpoint. That may be true on some c++ compiler,
but it would NOT be true for c compilers, which is what this is about
(at least, to me.)
I still wasn't clear enough - the above code is generated by a C compiler.
There is no difference between C and C++ in this respect.
The compiler/linker is much more likely to optimise away fetching a const
pointer than it is to optimise away the const pointer completely. You need
to look for ptr in the data segment, not at the generated code using ptr.

However,

typedef struct{int x, y, z;} T;
#define ptr ((T*)0x8000)
ptr->y = 1;

This works just the same and there is definitely no stored pointer.

--
Walter Banks
2007-04-11 13:52:39 UTC
Permalink
Post by Jonathan Kirwan
My point wasn't about situations with very small amounts of memory. I
would use this feature on very large systems with hundreds of small
source files, in fact. And use it well.
In any case, I'm sure there were a lot of things on their (ISO) plate
and I am just interested in Walter's comment. I'm not in trying to
make a case for some ISO committee, much as I would like being able to
control the value of link-time symbolics. If I were tilting that
windmill, I've got much better issues to present them which would make
a much larger difference in my life.
Thanks for the discussion,
Jon
In Byte Craft's linkers we allow link time #defines to be part of the
linker script or included into the linker script (#defines with the
same syntax as C defines). I think this would accomplish what you
are looking for.

This is not in the context of ISO language definitions.



w..

--
Walter Banks
Byte Craft Limited
1 519 888 6911
http://www.bytecraft.com
***@bytecraft.com
Jonathan Kirwan
2007-04-10 20:11:52 UTC
Permalink
On Wed, 11 Apr 2007 09:52:39 -0400, Walter Banks
Post by Walter Banks
Post by Jonathan Kirwan
My point wasn't about situations with very small amounts of memory. I
would use this feature on very large systems with hundreds of small
source files, in fact. And use it well.
In any case, I'm sure there were a lot of things on their (ISO) plate
and I am just interested in Walter's comment. I'm not in trying to
make a case for some ISO committee, much as I would like being able to
control the value of link-time symbolics. If I were tilting that
windmill, I've got much better issues to present them which would make
a much larger difference in my life.
Thanks for the discussion,
Jon
In Byte Craft's linkers we allow link time #defines to be part of the
linker script or included into the linker script (#defines with the
same syntax as C defines). I think this would accomplish what you
are looking for.
It may, if those symbolics can be referenced in the c source. But I'm
looking for source language constructs, too.
Post by Walter Banks
This is not in the context of ISO language definitions.
Agreed.

Jon
c***@hotmail.com
2007-04-10 17:18:24 UTC
Permalink
Post by R Pradeep Chandran
Are you sure that only the link process is required. I think that you
need to compile (or assemble) the file containing the declaration of
abc (or _abc).
To put it simply, somehow you need to put the new desired value of the
constant into a form that the linker can understand.

You could open an old .o file in emacs and manually alter it...

You could put the definitions alone into a small .c file and recompile
only those.
Or the same with an assembly language file that you would assemble.

You could write a linker that lets you define things on the command
line

You could write a tool that takes command line options and creates an
object file...
Walter Banks
2007-04-11 13:29:20 UTC
Permalink
Post by Jonathan Kirwan
On Mon, 09 Apr 2007 06:47:40 -0400, Walter Banks
Post by Walter Banks
Post by t***@gmail.com
I need to store at consecutive memory locations,
say starting from 0x8000h . Will appreciate if anyone can list the
syntax / instructions to go about the same.
Byte Craft's compilers have a language extension that allows
defining the location of variables.
for example.
Hi, Walter. Does this look like a storage request to the linker? Or
does it merely create a symbolic linker value?
To put my question in exact context, one of the things I miss having
in C and readily find with assembly coding (usually) is the ability to
define link-time constants. By this, I mean the equivalent in x86 asm
_abc EQU 0x123
PUBLIC _abc
This places a symbolic value in the OBJ file that can be linked into
extern int abc;
#define MYCONST (&abc)
and then refer to MYCONST throughout the c source. The result allows
me to modify the constant definitions without requiring recompilation
of the c sources. The link process is all that is required.
If your c compiler creates the object equivalent of a PUBDEF and an
That would be interesting.
I hope that sharpens my question.
Jon
Jon,

The actual syntax is

. . . @ integer_constant_expression

so

#define MYLOC &opq+29

int abc @ MYLOC;

is legal

It is possible to define MYLOC at link time
meaning that the location of abc can be
defined at link time.

This may only partly answer the question because
we allow conditional compiles be evaluated at link
time as well. This happens as a side effect of compiling
to intermediate code objects. After the linker organizes
the objects into an application the code full application
optimized.

w..

--
Walter Banks
Byte Craft Limited
1 519 888 6911
http://www.bytecraft.com
***@bytecraft.com
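Jonathan's link-time-constant trick carries the value in a symbol's address rather than in initialised data. A self-contained sketch (here a C definition of `abc` stands in for the assembly module, so the value is simply wherever the linker places the symbol; in the real scheme the EQU line pins it):

```c
#include <stdint.h>

/* The constant lives in the ADDRESS of abc, not in its contents.
 * In the real scheme an assembly file provides:
 *     _abc    EQU  0x123
 *     PUBLIC  _abc
 * and changing 0x123 needs only a re-assemble and a relink. */
extern char abc;
#define MYCONST ((uintptr_t)&abc)

/* Stand-in definition so this sketch links on its own. */
char abc;

uintptr_t read_link_time_constant(void)
{
    return MYCONST;   /* resolved by the linker, never recompiled */
}
```

The names `abc` and `MYCONST` follow Jonathan's example; no C source using `MYCONST` ever needs recompiling when the assembly-side value changes.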
Vladimir Vassilevsky
2007-04-09 12:58:23 UTC
Permalink
Post by t***@gmail.com
Hello all
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. I will appreciate it if anyone can list the
syntax / instructions to go about the same.
1. Declare a section at the desired memory location in the linker file.
2. Declare your variables as an array or structure in that section in C
code.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

http://www.abvolt.com
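With a GNU-style toolchain, Vladimir's two steps look roughly like this (a sketch: the attribute syntax is GCC-specific and the section name `.fixed_vars` is invented; built hosted, the section simply lands wherever the default linker script puts it):

```c
/* Step 2: one struct holds the variables in declared order,
 * placed in a dedicated section (GNU extension). */
struct fixed_block {
    int  sensor_count;
    char mode;
};

static struct fixed_block vars __attribute__((section(".fixed_vars")));

int setup(void)
{
    vars.sensor_count = 3;
    vars.mode = 'A';
    return vars.sensor_count;
}
```

Step 1 is then a linker-script fragment along the lines of `.fixed_vars 0x8000 : { *(.fixed_vars) }` (GNU ld syntax, also a sketch).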
Dave Hansen
2007-04-09 16:34:37 UTC
Permalink
Post by t***@gmail.com
Hello all
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. I will appreciate it if anyone can list the
syntax / instructions to go about the same.
The fields of a struct will be laid down in memory in the order they
are declared, e.g.,

struct
{
int first;
int second;
char third;
int fourth;
} Contiguous;

declares a variable named Contiguous with four fields (first, second,
third, and fourth), such that &Contiguous.first < &Contiguous.second <
&Contiguous.third < &Contiguous.fourth. However, there may be some
padding bytes between fields (particularly between third and fourth,
though it could be between first and second as well, depending on the
implementation's alignment requirements).

Locating the variable Contiguous at, say, 0x8000 is the job of the
linker rather than the compiler.

Regards,

-=Dave
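Dave's ordering and padding claims can be checked mechanically with `offsetof` (a hosted sketch of the same struct):

```c
#include <stddef.h>

struct contiguous {
    int  first;
    int  second;
    char third;
    int  fourth;
};

/* Padding, if any, that the implementation inserted after 'third'
 * to satisfy the alignment requirement of 'fourth'. */
size_t padding_after_third(void)
{
    return offsetof(struct contiguous, fourth)
         - (offsetof(struct contiguous, third) + sizeof(char));
}
```

Members always appear in declaration order with strictly increasing offsets; only the amount of padding is implementation-defined.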
Hans-Bernhard Bröker
2007-04-09 21:55:03 UTC
Permalink
Post by t***@gmail.com
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000.
Are you sure that's actually what you _need_ to do? As opposed to what
you _want_ to do, that is?

Locating several data objects in defined sequence is what C gives you
data structures for --- you shouldn't usually be trying to do that with
individual variables, but with elements of a container data type.

Locating objects at absolute addresses is none of a C compiler's job,
really. Which is why Standard C has no syntax to do it. That kind of
work should be done either directly in assembler (because only assembler
code habitually relies on such knowledge anyway), or by the linker.
msg
2007-04-09 22:05:12 UTC
Permalink
Post by Hans-Bernhard Bröker
Locating objects at absolute addresses is none of a C compiler's job,
really. Which is why Standard C has no syntax to do it. That kind of
work should be done either directly in assembler (because only assembler
code habitually relies on such knowledge anyway), or by the linker.
Hi-Tech C's approach to this is very useful and works well; the compilers
for many architectures include low-level constructs that in many cases
avoid the need for assembly code and special link commands. It is
common to specify absolute locations using the '@' modifier as in

static unsigned char int_mask @ 0x08;

Regards,

Michael
R Pradeep Chandran
2007-04-09 23:11:02 UTC
Permalink
Post by msg
Hi-Tech C's approach to this is very useful and works well; the compilers
for many architectures include low-level constructs that in many cases
avoid the need for assembly code and special link commands. It is
It is not really an elegant solution. ISO/IEC 9899:1999 provides the
#pragma directive to handle such cases. In fact many compilers use
this approach for implementation-specific operations (the Tasking compiler,
for example). This approach makes it easier to use tools like PCLint.

Have a nice day,
Pradeep
--
All opinions are mine and do not represent the views or
policies of my employer.
R Pradeep Chandran rpc AT pobox DOT com
MisterE
2007-04-11 07:22:36 UTC
Permalink
Post by t***@gmail.com
Hello all
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. I will appreciate it if anyone can list the
syntax / instructions to go about the same.
What you can do is make a global pointer and just set it to that address.

int *p;
p = *(int *)&0x8000;
Grant Edwards
2007-04-11 14:47:29 UTC
Permalink
Post by MisterE
Post by t***@gmail.com
Hello all
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. I will appreciate it if anyone can list the
syntax / instructions to go about the same.
What you can do is make a global pointer and just set it to that address.
int *p;
p = *(int *)&0x8000;
Close, but no cigar: you've got both an extra '*' and an extra '&'.

ITYM

int *p = (int*)0x8000;

It's almost always more efficient if you do it this way:

#define p ((int*)0x8000)

That doesn't take up any storage for 'p' and allows the
compiler to avoid loading and de-referencing the pointer.

Of course, the OP's problem is better solved by declaring his
"certain variables" as residing in a unique memory section and
then telling the linker where to put that memory section.

If he wants them in a particular order within that section,
then he needs to use a structure.
--
Grant Edwards grante Yow! Hello? Enema
at Bondage? I'm calling
visi.com because I want to be happy,
I guess...
dick
2007-04-14 20:16:19 UTC
Permalink
You can ask your low-level guy for support. He should know how to
assign memory locations for your globals.

And you can search the keywords: "#pragma", "section", "segment",
"link file", "linker".

good luck.
Post by t***@gmail.com
Hello all
I'm a fresher to Microcontroller programming. I have defined certain
global variables that I need to store at consecutive memory locations,
say starting from 0x8000. I will appreciate it if anyone can list the
syntax / instructions to go about the same.
thanks
techie