The `size_t` type is defined as the unsigned integer type of the result of the `sizeof` operator. In the real world, you will often see `int` defined as 32 bits (for backward compatibility) but `size_t` defined as 64 bits on 64-bit platforms (so you can declare arrays and structures larger than 4 GiB). If `long int` is also 64 bits, this is called the LP64 convention; if `long int` is 32 bits but `long long int` and pointers are 64 bits, that's LLP64. You might also get the reverse: a program that uses 64-bit instructions for speed, but 32-bit pointers to save memory. Also, `int` is signed and `size_t` is unsigned.
There were historically a number of other platforms where addresses were wider or narrower than the native size of `int`. In fact, in the ’70s and early ’80s, this was more common than not: all the popular 8-bit microcomputers had 8-bit registers and 16-bit addresses, and the transition between 16 and 32 bits also produced many machines that had addresses wider than their registers. I occasionally still see questions here about Borland Turbo C for MS-DOS, whose huge memory model had 20-bit addresses stored in 32 bits on a 16-bit CPU (but which could support the 32-bit instruction set of the 80386); the Motorola 68000 had a 16-bit ALU with 32-bit registers and addresses; there were IBM mainframes with 15-bit, 24-bit or 31-bit addresses. You also still see different ALU and address-bus sizes in embedded systems.
Any time `int` is narrower than `size_t`, and you try to store the size or offset of a very large file or object in an `unsigned int`, there is the possibility that it will silently wrap around and cause a bug. With an `int`, there is also the possibility of getting a negative number. In the opposite case, where `int` or `unsigned int` is wider than it needs to be, the program will run correctly but waste memory.
You should generally use the correct type for the purpose if you want portability. A lot of people will recommend that you use signed math instead of unsigned (to avoid nasty, subtle bugs like `1U < -3`). For that purpose, the standard library defines `ptrdiff_t` in `<stddef.h>` as the signed type of the result of subtracting one pointer from another.
That said, a workaround might be to bounds-check all addresses and offsets against `INT_MAX` and either `0` or `INT_MIN` as appropriate, and turn on the compiler warnings about comparing signed and unsigned quantities (`-Wsign-compare`, enabled by `-Wextra` on GCC and Clang) in case you miss any. You should always, always, always be checking your array accesses for overflow in C anyway.