Compiling a large (~6 MB) map-initializing C++ file with gcc
I'm trying to compile a C++ source file that is about 5.7 MB in size. I'm
building a 64-bit Linux executable on a 64-bit Linux system. Unfortunately,
g++ 4.7.2 is not cooperative:
g++: internal compiler error: Killed (program cc1plus)
Observing with top shows that the process reaches about 2.2 GB of memory
before that happens. I tried setting --param ggc-min-expand=0 and also
played with --param ggc-min-heapsize, but that did not resolve the
problem. Disabling optimization with -O0 did not help either.
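For concreteness, the kind of invocation I've been trying looks like this
(the file name and the heap-size value are just examples):

g++ -std=c++11 -O0 \
    --param ggc-min-expand=0 --param ggc-min-heapsize=8192 \
    -c bigmap.cpp -o bigmap.o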
I also tried compiling with clang, but the results were similar: it
segfaulted after likewise exceeding 2 GB of memory. I didn't try any extra
options with clang because I'm not as familiar with it.
The source file in question consists of C++11-style initialization of a
few maps.
#include <map>
#include <string>
typedef std::map<std::string, int> StringToIntMap;
StringToIntMap someData = {{"SOMESTRING", 1}, /* ... */};
What I want, preferably, is to compile the file with gcc, although if clang
works instead, I can live with that too. It would also be helpful to hear,
from someone who knows the internals, just what is happening behind the
scenes. If I have a map of 300,000 elements whose strings are about 5 bytes
long, each paired with an int, that's a few megabytes of data, and I can't
readily imagine how the initializer blows it up to the point of requiring
gigabytes to compile.
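My rough mental model of the blow-up, which may well be wrong, is that the
front end has to hold the entire braced list as a single full-expression:
one std::pair temporary per element, all alive in one std::initializer_list.
A hand-written sketch of what I imagine the one-liner expands to (not actual
compiler output):

// One std::pair<const std::string, int> temporary per element, all part
// of the same std::initializer_list and therefore the same expression tree:
StringToIntMap someData({
    std::pair<const std::string, int>("SOMESTRING", 1),
    // ... ~300,000 more temporaries, all in one full-expression ...
});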
And to preempt comments that I should not have such a large source file: I
know I could read the data from a data file at runtime, and that is what
the program does now, but in my use case the program's execution time is
the most important factor.
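For reference, the runtime loading the program does today looks roughly
like the following, simplified; the one-pair-per-line file format is just
illustrative:

#include <fstream>
#include <map>
#include <string>

typedef std::map<std::string, int> StringToIntMap;

// Current approach, simplified: parse "KEY value" lines at startup.
// It works, but the parsing cost is paid on every run, and startup
// time is exactly what I'm trying to eliminate.
StringToIntMap loadMap(const char* path)
{
    StringToIntMap m;
    std::ifstream in(path);
    std::string key;
    int value;
    while (in >> key >> value)
        m.insert(std::make_pair(key, value));
    return m;
}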