Programming language and processor architecture dependency?

So I have a few doubts that I would like to clarify.

Suppose I write a C++ program and compile and link it on a RHEL machine with an x86 processor. I then compile and link the same program on a RHEL machine with a PowerPC processor.

Now, if I make changes to my C++ program and compile it on both platforms again, is it possible that some feature fails on the machine with one CPU but works on the machine with the other? I mean, compiling and linking are OS dependent, and the OS handles the processor architecture. So if the environments on both OSes are the same except for the processor architecture, will that affect my program?

I think I'm missing some concepts; help is appreciated.
Thanks!
 
Not sure what you mean when you say compiling and linking are OS dependent.

Compiling and linking are done by the compiler toolchain. The linking stage does depend on the OS when shared libraries are involved (not necessarily true for some languages). All modern compilers are architecture-aware, so if you build for the architecture you are going to run on, you will not have any problems. Building on x86 and attempting to run the resulting binary on PowerPC will not work, for obvious reasons.

To answer your question specifically: if the compiler claims to support the architecture you are compiling and building on, you will not have any problems running your program.
 
The answer to the question "Will it affect my program?" is both yes and no. It depends on which features you are using. Here "features" means the extensions implemented by the processor's instruction set, e.g. SSE3, MMX, etc.

For a simple real-world example, if a video encoder is written to take advantage of the SSE features of the x86 architecture, it will run faster on processors with SSE extensions.
 
The program is big, but it won't be using any processor-specific instructions.
So in this case, if the OS environment is the same and only the CPU architecture is different, is testing on both architectures pointless?
Obviously it will be compiled separately on each architecture, but is testing on only one of them enough, making the other redundant?
 
Differences across architectures are definitely possible, depending on the kind of code you are writing. Architecture-specific bugs are a reality, and I can confirm that from personal experience developing for x86 Windows, PPC Mac OS, and x86 Mac OS with a common code base. So having the same code base is no guarantee that the program will behave consistently across architectures. You will definitely need to test on both architectures, even if it is the same OS you are working with. In fact, we used to get many bugs specific to one architecture: most of the time the behavior was correct on, say, x86 Windows and x86 Mac OS, and incorrect on PPC Mac OS. It was a point for us to cover many of these scenarios through automated unit tests.

This is because, leaving instruction sets and big differences like CISC vs. RISC aside, one very fundamental difference is that x86 is little-endian while PPC is big-endian. So there is a fundamental difference in how data is organized in memory between the two architectures.

Imagine working with pointers to traverse large chunks of data where information may even need to be accessed at the bit level. Imagine working with complex files that have to be compatible across architectures and OSes, and you will get an idea of why testing is definitely required. If you write a file in little-endian format on an x86 platform and you need to open that file on a big-endian architecture, do you think you can do it without some additional translation code that runs on that architecture? And if there are architecture-specific pieces of code, don't you think you will need to test them separately?
 
If you stick to the ANSI/ISO specification of C++, you should be good. It will even work across operating systems.
 
^^ Following a standard just ensures that the program will build across compilers that support the specification and that its behavior is well defined. It doesn't account for differences in platform architecture like endianness. If you have to exchange binary data across platforms, you will run into endianness issues for sure.
 
When compiling a program, the compiler optimizes the code. During this process it may make use of platform-specific optimized machine instructions, which may be absent on other machines. In the case of languages that run on a virtual abstraction layer, like Java, the compiler does not emit machine instructions directly but rather uses generic instructions provided by that abstraction layer.
 