I can share some of my own experience porting 32-bit Windows code
to 64-bit Ubuntu and Fedora.
The trickiest situations were in parts of the code where the original author had cast
an int* to an int (to do some checks and some slightly unusual pointer arithmetic,
then write the result back into the int*). On 32 bits both types are 32 bits wide, but
on 64 bits they diverge (sizeof(int*) = 8, sizeof(int) = 4), so the upper 4 bytes of the
address were lost, which led to some unpleasant crashes.
The other snag involved variables of type long. The original 32-bit code assumed
the variable was 4 bytes wide, while on 64-bit Linux it is 8 bytes.
Using fixed-width integer types such as int16_t, int32_t and so on contributed
significantly to more precise portability (for individual variables).
Of course, as @Nedeljko said, when it comes to structures and alignment that
precision alone won't be enough: the compiler's usual habit is to pad each member
up to its natural alignment and then round the structure's size up to a multiple of
the strictest member's alignment, so a structure can end up larger than the sum of
its members' sizes. Fortunately GCC supports #pragma pack(N)
http://gcc.gnu.org/onlinedocs/...cture_002dPacking-Pragmas.html
so that problem can be dealt with.
Quote:
A byte (char) is 8 bits everywhere, don't worry about that.
Although in theory the definition leaves a lot of freedom (there are even documented
examples of machines/OSes where this is not the case), I fully share this opinion.
So many specifications of all kinds operate in terms of the 'octet' that I simply
cannot imagine everyday programming giving up, any time soon, a data type whose
width is 8 bits (even if storage weren't an issue, it would still be useful to have a
pointer type that advances by 8 bits when incremented). Even if one could go wild
at the OS level, the tons of other hardware designed on the foundations of the
8-bit byte cannot be replaced overnight.
Likewise, I think short (16 bits) won't change size for at least a while longer, since
at least one industry (audio) has a well-established use for it (16 bits of dynamic
range matches real audio needs fairly well), so I don't believe a change will be
forced on multimedia + conferencing + ... that easily.
The well-known guaranteed chain sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
will, by all accounts, play out so that the only winner is long, which on 64 bits
finally gets a bed long enough to stretch out in properly.
This answer collects the valuable details on the topic in one place:
http://stackoverflow.com/quest...size-of-long-on-64-bit-windows
Quote:
In the Unix world, there were a few possible arrangements for the sizes of integers and pointers for 64-bit platforms. The two mostly widely used were ILP64 (actually, only a very few examples of this; Cray was one such) and LP64 (for almost everything else). The acronyms come from 'int, long, pointers are 64-bit' and 'long, pointers are 64-bit'.
Type        ILP64   LP64   LLP64
char            8      8       8
short          16     16      16
int            64     32      32
long           64     64      32
long long      64     64      64
pointer        64     64      64
The ILP64 system was abandoned in favour of LP64 (that is, almost all later entrants used LP64, based on the recommendations of the Aspen group; only systems with a long heritage of 64-bit operation use a different scheme). All modern 64-bit Unix systems use LP64. MacOS X and Linux are both modern 64-bit systems.
Microsoft uses a different scheme for transitioning to 64-bit: LLP64 ('long long, pointers are 64-bit'). This has the merit of meaning that 32-bit software can be recompiled without change. It has the demerit of being different from what everyone else does, and also requires code to be revised to exploit 64-bit capacities. There always was revision necessary; it was just a different set of revisions from the ones needed on Unix platforms.
If you design your software around platform-neutral integer type names, probably using the C99 <inttypes.h> header, which, when the types are available on the platform, provides, in signed (listed) and unsigned (not listed; prefix with 'u'):
int8_t - 8-bit integers
int16_t - 16-bit integers
int32_t - 32-bit integers
int64_t - 64-bit integers
uintptr_t - unsigned integers big enough to hold pointers
intmax_t - biggest size of integer on the platform (might be larger than int64_t)
You can then code your application using these types where it matters, and being very careful with system types (which might be different). There is an intptr_t type - a signed integer type for holding pointers; you should plan on not using it, or only using it as the result of a subtraction of two uintptr_t values (ptrdiff_t).
But, as the question points out (in disbelief), there are different systems for the sizes of the integer data types on 64-bit machines. Get used to it; the world isn't going to change.