
Programming languages

From here

a high-level language

Anything higher than assembler is a high-level language

LDZA LDVA, Croatia

Emir wrote:

Anything higher than assembler is a high-level language

And the higher the level of language, the lower the brain activity of the programmer?

Germany

Emir wrote:

Anything higher than assembler is a high-level language

That was true 40 years ago, but not today. C is little more than a portable assembler.

ESKC (Uppsala/Sundbro), Sweden

I don’t think so. C is nothing like an assembler. It’s a language with loads of complex rules, some quite unexpected. Take integer promotion for example.

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

It’s a language with loads of complex rules, some quite unexpected.

As if that wasn’t true for assemblers… My point is that C is designed to do very little abstraction of the actual hardware because it was intended to implement Unix. E.g. all of the data types in C are basically hardware data types with a minimal amount of sugaring. There is really not much you can do in C that you couldn’t just as well do in a good macro assembler. Compare that to e.g. the data types in Python. (Just picking a reasonably advanced and well-known language.) Even when C was designed there were many programming languages that would be considered high level even by today’s standards.
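E.g. the classic memory-mapped register idiom (a sketch; the register name and address are made up):

#include <stdint.h>

/* a hardware register in C is just a cast of a fixed address to a
   typed pointer; the only "abstraction" is the type width
   (the address here is hypothetical) */
#define TIMER_COUNT (*(volatile uint32_t *)0x40000010u)

uint32_t read_timer(void)
{
    return TIMER_COUNT;   /* typically compiles to a single load */
}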

ESKC (Uppsala/Sundbro), Sweden

if that wasn’t true for assemblers

It isn’t. They do exactly what you type in.

My point is that C is designed to do very little abstraction of the actual hardware

C does exactly that. It is defined in terms of a virtual machine.

Try inverting (~) a uint8_t Fred = 0x55 and comparing it with uint8_t Joe = 0xAA.
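i.e. something like this (a quick sketch, assuming a 32-bit int as on arm32):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t Fred = 0x55;
    uint8_t Joe  = 0xAA;
    /* Fred is promoted to int before ~ is applied, so ~Fred is
       0xFFFFFFAA with a 32-bit int, while Joe promotes to 0xAA */
    if (~Fred == Joe)
        printf("equal\n");
    else
        printf("not equal\n");   /* this branch is taken */
    return 0;
}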

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

It is defined in terms of a virtual machine.

In a trivial sense every programming language – even assembler – defines a virtual machine. But the one in C is extremely close to the underlying hardware. The language is also more loosely defined than almost any other language.

ESKC (Uppsala/Sundbro), Sweden

Wires totally crossed. I suggest trying my code example with a C99 compiler, and then explaining what happened.

An assembler (mnemonics) produces machine code on a 1:1 basis.

C does all kinds of weird stuff, e.g. if the target machine “int” is larger than your variable size (practically always the case when doing uint8_t operations – have there been any C compilers with an 8-bit-wide “int”?) then the uint8_t gets promoted to “int”. Most of the time this won’t bite you, but it can if you are doing anything which sets the higher-order bits (bit 8 and up), so e.g. on an arm32 if you do ~0x55 you don’t get 0xAA, you get 0xFFFFFFAA.

Similarly with a uint16_t x = 0x5555 you would get 0xFFFFAAAA upon inversion. If that is then cast back to a uint16_t, the FFFF is discarded and nobody notices, but if you compare it directly against a uint16_t y = 0xAAAA, e.g.
if (~x == y)
that will never compare equal, because the promoted ~x has its upper bits set to 1 while y does not.
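In code (a sketch, again assuming a 32-bit int):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t x = 0x5555;
    uint16_t y = 0xAAAA;
    /* x promotes to int, so ~x is 0xFFFFAAAA, while y promotes to
       0x0000AAAA; with a 32-bit int they never compare equal */
    if (~x == y)
        printf("equal\n");
    else
        printf("not equal\n");      /* taken on a 32-bit int machine */
    /* casting back to uint16_t throws away the upper bits */
    if ((uint16_t)~x == y)
        printf("equal after cast\n");   /* taken */
    return 0;
}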

This possibly was not the case 30-40 years ago. Today, C is defined in terms of a VM and the compiler is free to do absolutely anything which complies with that, no matter how apparently useless. So, for example, unreferenced variables will be removed unless declared volatile etc. (not at all the case with 1980s compilers), and you get some hilarious cases of code removal where the compiler decides that some conditional will never be met, for subtle reasons, and then strips out a massive chunk of your program. I spent a whole day recently working out why a jump (written in inline asm) to the base of an overlay (long story why) was not working; it was due to the foregoing. The simplest solution turned out to be an asm reference to the code being removed; optimisation never touches asm code.
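A trivial illustration of the volatile point (a contrived sketch, not the overlay code):

#include <stdint.h>

void delay_loop(void)
{
    /* without volatile a modern optimising compiler removes the
       whole loop, since i is never used afterwards; a 1980s
       compiler would have emitted it as written */
    volatile uint32_t i;
    for (i = 0; i < 100000u; i++)
        ;
}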

The origin of C is on “big” machines, implementing Unix, not 8/16-bit micros, hence historically “int” was 16 bits or bigger, and nobody thought that promoting everything to “int” was the stupid idea which I think it is. The bottom line is that on something like an arm32 there is no point in using 8- or 16-bit variables except to save storage space, or to represent byte data. It doesn’t get you more speed.

Lots of people regard C as a fancy assembler and it works ok until you do something like the above.

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

It isn’t. They do exactly what you type in.

They don’t. Arguably the most important task of the assembler is label handling and the hardware doesn’t have labels. In some architectures the assembler will even determine addressing modes for you.

Last Edited by Airborne_Again at 26 Jun 10:01
ESKC (Uppsala/Sundbro), Sweden

Peter wrote:

Lots of people regard C as a fancy assembler and it works ok until you do something like the above.

It is a fancy assembler. And it is not precisely defined, e.g. the bit sizes of the various data types are deliberately left undefined. There are some constraints, like a short must not be larger than an int, but otherwise the underlying hardware is reflected in the sizes of the basic data types.
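E.g. (a sketch; the numbers it prints depend entirely on the platform):

#include <stdio.h>

int main(void)
{
    /* the absolute sizes are implementation-defined; the standard
       only guarantees minimum ranges and the ordering, e.g. a
       short may not hold more than an int */
    printf("short %zu, int %zu, long %zu\n",
           sizeof(short), sizeof(int), sizeof(long));
    return 0;
}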

Your example actually demonstrates that beautifully. On an architecture with 16-bit ints, ~x == y would be true. (Well, it would be 1. Again the underlying hardware is exposed, as it doesn’t have a boolean data type.)

If you use uint32_t instead and 32-bit constants then ~x == y would be true on an architecture with 32-bit ints, but not on one with 64-bit ints.

Trying to avoid the integer promotion by declaring both x and y as int won’t work either as you don’t know how long the two constants have to be – that would again depend on the size of an int. (Sure, you could avoid that by writing code that creates the necessary values at runtime using sizeof(int) instead of using constants. Possibly a good optimising compiler could even optimise away that code and insert the constant.)
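Something along those lines (a sketch of that idea; unsigned is used so the shifts stay well-defined, and 8-bit bytes are assumed):

#include <stdio.h>

int main(void)
{
    unsigned int x = 0, y = 0;
    size_t i;
    /* build the 0x55...55 and 0xAA...AA patterns to the actual
       width of an int instead of hard-coding 32- or 64-bit
       constants */
    for (i = 0; i < sizeof(unsigned int); i++) {
        x = (x << 8) | 0x55u;
        y = (y << 8) | 0xAAu;
    }
    printf("%d\n", ~x == y);   /* prints 1 whatever the int width */
    return 0;
}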

ESKC (Uppsala/Sundbro), Sweden