openblt

a hobby OS from the late 90s
git clone http://frotz.net/git/openblt.git

malloc.c (96959B)


      1 /* $Id: //depot/blt/lib/malloc.c#2 $ */
      2 
      3 /* ---------- To make a malloc.h, start cutting here ------------ */
      4 
      5 /* 
      6   A version of malloc/free/realloc written by Doug Lea and released to the 
      7   public domain.  Send questions/comments/complaints/performance data
      8   to dl@cs.oswego.edu
      9 
     10 * VERSION 2.6.5  Wed Jun 17 15:55:16 1998  Doug Lea  (dl at gee)
     11 
     12    Note: There may be an updated version of this malloc obtainable at
     13            ftp://g.oswego.edu/pub/misc/malloc.c
     14          Check before installing!
     15 
     16    Note: This version differs from 2.6.4 only by correcting a
     17          statement ordering error that could cause failures only
     18          when calls to this malloc are interposed with calls to
     19          other memory allocators.
     20 
     21 * Why use this malloc?
     22 
     23   This is not the fastest, most space-conserving, most portable, or
     24   most tunable malloc ever written. However it is among the fastest
     25   while also being among the most space-conserving, portable and tunable.
     26   Consistent balance across these factors results in a good general-purpose 
     27   allocator. For a high-level description, see 
     28      http://g.oswego.edu/dl/html/malloc.html
     29 
     30 * Synopsis of public routines
     31 
     32   (Much fuller descriptions are contained in the program documentation below.)
     33 
     34   malloc(size_t n);
     35      Return a pointer to a newly allocated chunk of at least n bytes, or null
     36      if no space is available.
     37   free(Void_t* p);
     38      Release the chunk of memory pointed to by p; no effect if p is null.
     39   realloc(Void_t* p, size_t n);
     40      Return a pointer to a chunk of size n that contains the same data
     41      as does chunk p up to the minimum of (n, p's size) bytes, or null
     42      if no space is available. The returned pointer may or may not be
     43      the same as p. If p is null, equivalent to malloc.  Unless the
     44      #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     45      size argument of zero (re)allocates a minimum-sized chunk.
     46   memalign(size_t alignment, size_t n);
     47      Return a pointer to a newly allocated chunk of n bytes, aligned
     48      in accord with the alignment argument, which must be a power of
     49      two.
     50   valloc(size_t n);
     51      Equivalent to memalign(pagesize, n), where pagesize is the page
     52      size of the system (or as near to this as can be figured out from
     53      all the includes/defines below.)
     54   pvalloc(size_t n);
     55      Equivalent to valloc(minimum-page-that-holds(n)), that is,
     56      round up n to nearest pagesize.
     57   calloc(size_t unit, size_t quantity);
     58      Returns a pointer to quantity * unit bytes, with all locations
     59      set to zero.
     60   cfree(Void_t* p);
     61      Equivalent to free(p).
     62   malloc_trim(size_t pad);
     63      Release all but pad bytes of freed top-most memory back 
     64      to the system. Return 1 if successful, else 0.
     65   malloc_usable_size(Void_t* p);
     66      Report the number of usable allocated bytes associated with allocated
     67      chunk p. This may or may not report more bytes than were requested,
     68      due to alignment and minimum size constraints.
     69   malloc_stats();
     70      Prints brief summary statistics on stderr.
     71   mallinfo()
     72      Returns (by copy) a struct containing various summary statistics.
     73   mallopt(int parameter_number, int parameter_value)
     74      Changes one of the tunable parameters described below. Returns
     75      1 if successful in changing the parameter, else 0.
     76 
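          A minimal usage sketch of the routines above (illustrative only; the
          sizes and the tuning value are arbitrary, not recommendations):

              char* buf = (char*) malloc(100);       allocate >= 100 usable bytes
              buf = (char*) realloc(buf, 200);       may move; old contents preserved
              free(buf);                             return the space to the allocator

              void* page = valloc(100);              page-aligned allocation
              free(page);
              mallopt(M_TRIM_THRESHOLD, 64*1024);    returns 1 if the parameter was set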
     77 * Vital statistics:
     78 
     79   Alignment:                            8-byte
     80        8 byte alignment is currently hardwired into the design.  This
     81        seems to suffice for all current machines and C compilers.
     82 
     83   Assumed pointer representation:       4 or 8 bytes
     84        Code for 8-byte pointers is untested by me but has been used
     85        reliably by Wolfram Gloger, who contributed most of the
     86        changes supporting this.
     87 
     88   Assumed size_t  representation:       4 or 8 bytes
     89        Note that size_t is allowed to be 4 bytes even if pointers are 8.        
     90 
     91   Minimum overhead per allocated chunk: 4 or 8 bytes
     92        Each malloced chunk has a hidden overhead of 4 bytes holding size
     93        and status information.  
     94 
     95   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
     96                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
     97                                      
     98        When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
     99        ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
    100        needed; 4 (8) for a trailing size field
    101        and 8 (16) bytes for free list pointers. Thus, the minimum
    102        allocatable size is 16/24/32 bytes.
    103 
    104        Even a request for zero bytes (i.e., malloc(0)) returns a
    105        pointer to something of the minimum allocatable size.
    106 
    107   Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
    108                           8-byte size_t: 2^63 - 16 bytes
    109 
    110        It is assumed that (possibly signed) size_t bit values suffice to
    111        represent chunk sizes. `Possibly signed' is due to the fact
    112        that `size_t' may be defined on a system as either a signed or
    113        an unsigned type. To be conservative, values that would appear
    114        as negative numbers are avoided.  
    115        Requests for sizes with a negative sign bit will return a
    116        minimum-sized chunk.
    117 
    118   Maximum overhead wastage per allocated chunk: normally 15 bytes
    119 
    120        Alignment demands, plus the minimum allocatable size restriction,
    121        make the normal worst-case wastage 15 bytes (i.e., up to 15
    122        more bytes will be allocated than were requested in malloc), with 
    123        two exceptions:
    124          1. Because requests for zero bytes allocate non-zero space,
    125             the worst case wastage for a request of zero bytes is 24 bytes.
    126          2. For requests >= mmap_threshold that are serviced via
    127             mmap(), the worst case wastage is 8 bytes plus the remainder
    128             from a system page (the minimal mmap unit); typically 4096 bytes.
    129 
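          A worked example of the figures above, assuming 4-byte pointers and
          4-byte size_t (the common 32-bit case, so chunks are 8-byte aligned
          and the minimum chunk is 16 bytes):

              malloc(1)   uses a 16-byte chunk: 4 bytes of overhead, 12 usable
              malloc(13)  needs 13 + 4 = 17, rounded up to a 24-byte chunk:
                          20 usable bytes, 7 bytes of wastage
              malloc(24)  needs 24 + 4 = 28, rounded up to a 32-byte chunk:
                          28 usable bytes, 4 bytes of wastage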
    130 * Limitations
    131 
    132     Here are some features that are NOT currently supported
    133 
    134     * No user-definable hooks for callbacks and the like.
    135     * No automated mechanism for fully checking that all accesses
    136       to malloced memory stay within their bounds.
    137     * No support for compaction.
    138 
    139 * Synopsis of compile-time options:
    140 
    141     People have reported using previous versions of this malloc on all
    142     versions of Unix, sometimes by tweaking some of the defines
    143     below. It has been tested most extensively on Solaris and
    144     Linux. It is also reported to work on WIN32 platforms.
    145     People have also reported adapting this malloc for use in
    146     stand-alone embedded systems.
    147 
    148     The implementation is in straight, hand-tuned ANSI C.  Among other
    149     consequences, it uses a lot of macros.  Because of this, to be at
    150     all usable, this code should be compiled using an optimizing compiler
    151     (for example gcc -O2) that can simplify expressions and control
    152     paths.
    153 
    154   __STD_C                  (default: derived from C compiler defines)
    155      Nonzero if using ANSI-standard C compiler, a C++ compiler, or
    156      a C compiler sufficiently close to ANSI to get away with it.
    157   DEBUG                    (default: NOT defined)
    158      Define to enable debugging. Adds fairly extensive assertion-based 
    159      checking to help track down memory errors, but noticeably slows down
    160      execution.
    161   REALLOC_ZERO_BYTES_FREES (default: NOT defined) 
    162      Define this if you think that realloc(p, 0) should be equivalent
    163      to free(p). Otherwise, since malloc returns a unique pointer for
    164      malloc(0), so does realloc(p, 0).
    165   HAVE_MEMCPY               (default: defined)
    166      Define if you are not otherwise using ANSI STD C, but still 
    167      have memcpy and memset in your C library and want to use them.
    168      Otherwise, simple internal versions are supplied.
    169   USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
    170      Define as 1 if you want the C library versions of memset and
    171      memcpy called in realloc and calloc (otherwise macro versions are used). 
    172      At least on some platforms, the simple macro versions usually
    173      outperform libc versions.
    174   HAVE_MMAP                 (default: defined as 1)
    175      Define to non-zero to optionally make malloc() use mmap() to
    176      allocate very large blocks.  
    177   HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
    178      Define to non-zero to optionally make realloc() use mremap() to
    179      reallocate very large blocks.  
    180   malloc_getpagesize        (default: derived from system #includes)
    181      Either a constant or routine call returning the system page size.
    182   HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined) 
    183      Optionally define if you are on a system with a /usr/include/malloc.h
    184      that declares struct mallinfo. It is not at all necessary to
    185      define this even if you do, but will ensure consistency.
    186   INTERNAL_SIZE_T           (default: size_t)
    187      Define to a 32-bit type (probably `unsigned int') if you are on a 
    188      64-bit machine, yet do not want or need to allow malloc requests of 
    189      greater than 2^31 to be handled. This saves space, especially for
    190      very small chunks.
    191   INTERNAL_LINUX_C_LIB      (default: NOT defined)
    192      Defined only when compiled as part of Linux libc.
    193      Also note that there is some odd internal name-mangling via defines
    194      (for example, internally, `malloc' is named `mALLOc') needed
    195      when compiling in this case. These look funny but don't otherwise
    196      affect anything.
    197   WIN32                     (default: undefined)
    198      Define this on MS win (95, nt) platforms to compile in sbrk emulation.
    199   LACKS_UNISTD_H            (default: undefined)
    200      Define this if your system does not have a <unistd.h>.
    201   MORECORE                  (default: sbrk)
    202      The name of the routine to call to obtain more memory from the system.
    203   MORECORE_FAILURE          (default: -1)
    204      The value returned upon failure of MORECORE.
    205   MORECORE_CLEARS           (default: 1)
    206      True (1) if the routine mapped to MORECORE zeroes out memory (which
    207      holds for sbrk).
    208   DEFAULT_TRIM_THRESHOLD
    209   DEFAULT_TOP_PAD       
    210   DEFAULT_MMAP_THRESHOLD
    211   DEFAULT_MMAP_MAX      
    212      Default values of tunable parameters (described in detail below)
    213      controlling interaction with host system routines (sbrk, mmap, etc).
    214      These values may also be changed dynamically via mallopt(). The
    215      preset defaults are those that give best performance for typical
    216      programs/systems.
    217 
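          As one hypothetical example (exact flags will vary by platform and
          compiler), a stand-alone debug build with 32-bit internal sizes on a
          64-bit machine might be compiled roughly as:

              cc -O2 -DDEBUG -DINTERNAL_SIZE_T="unsigned int" -c malloc.c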
    218 
    219 */
    220 
    221 /* Preliminaries */
    222 
    223 #ifndef __STD_C
    224 #ifdef __STDC__
    225 #define __STD_C     1
    226 #else
    227 #if __cplusplus
    228 #define __STD_C     1
    229 #else
    230 #define __STD_C     0
    231 #endif /*__cplusplus*/
    232 #endif /*__STDC__*/
    233 #endif /*__STD_C*/
    234 
    235 #ifndef Void_t
    236 #if __STD_C
    237 #define Void_t      void
    238 #else
    239 #define Void_t      char
    240 #endif
    241 #endif /*Void_t*/
    242 
    243 #if __STD_C
    244 #include <stddef.h>   /* for size_t */
    245 #else
    246 #include <sys/types.h>
    247 #endif
    248 
    249 #ifdef __cplusplus
    250 extern "C" {
    251 #endif
    252 
    253 #include <stdio.h>    /* needed for malloc_stats */
    254 
    255 /*
    256   Compile-time options
    257 */
    258 
    259 /*
    260     Debugging:
    261 
    262     Because freed chunks may be overwritten with link fields, this
    263     malloc will often die when freed memory is overwritten by user
    264     programs.  This can be very effective (albeit in an annoying way)
    265     in helping track down dangling pointers.
    266 
    267     If you compile with -DDEBUG, a number of assertion checks are
    268     enabled that will catch more memory errors. You probably won't be
    269     able to make much sense of the actual assertion errors, but they
    270     should help you locate incorrectly overwritten memory.  The
    271     checking is fairly extensive, and will slow down execution
    272     noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    273     attempt to check every non-mmapped allocated and free chunk in the
    274     course of computing the summaries. (By nature, mmapped regions
    275     cannot be checked very much automatically.)
    276 
    277     Setting DEBUG may also be helpful if you are trying to modify 
    278     this code. The assertions in the check routines spell out in more 
    279     detail the assumptions and invariants underlying the algorithms.
    280 
    281 */
    282 
    283 #if DEBUG 
    284 #include <assert.h>
    285 #else
    286 #define assert(x) ((void)0)
    287 #endif
    288 
    289 /*
    290   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
    291   of chunk sizes. On a 64-bit machine, you can reduce malloc
    292   overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
    293   at the expense of not being able to handle requests greater than
    294   2^31. This limitation is hardly ever a concern; you are encouraged
    295   to set this. However, the default version is the same as size_t.
    296 */
    297 
    298 #ifndef INTERNAL_SIZE_T
    299 #define INTERNAL_SIZE_T size_t
    300 #endif
    301 
    302 /*
    303   REALLOC_ZERO_BYTES_FREES should be set if a call to
    304   realloc with zero bytes should be the same as a call to free.
    305   Some people think it should. Otherwise, since this malloc
    306   returns a unique pointer for malloc(0), so does realloc(p, 0). 
    307 */
    308 
    309 
    310 /*   #define REALLOC_ZERO_BYTES_FREES */
    311 
    312 /*
    313   HAVE_MEMCPY should be defined if you are not otherwise using
    314   ANSI STD C, but still have memcpy and memset in your C library
    315   and want to use them in calloc and realloc. Otherwise simple
    316   macro versions are defined here.
    317 
    318   USE_MEMCPY should be defined as 1 if you actually want to
    319   have memset and memcpy called. People report that the macro
    320   versions are often enough faster than libc versions on many
    321   systems that it is better to use them. 
    322 
    323 */
    324 
    325 #define HAVE_MEMCPY 
    326 
    327 #ifndef USE_MEMCPY
    328 #ifdef HAVE_MEMCPY
    329 #define USE_MEMCPY 1
    330 #else
    331 #define USE_MEMCPY 0
    332 #endif
    333 #endif
    334 
    335 #if (__STD_C || defined(HAVE_MEMCPY)) 
    336 
    337 #if __STD_C
    338 void* memset(void*, int, size_t);
    339 void* memcpy(void*, const void*, size_t);
    340 #else
    341 Void_t* memset();
    342 Void_t* memcpy();
    343 #endif
    344 #endif
    345 
    346 #if USE_MEMCPY
    347 
    348 /* The following macros are only invoked with (2n+1)-multiples of
    349    INTERNAL_SIZE_T units, with a positive integer n. This is exploited
    350    for fast inline execution when n is small. */
    351 
    352 #define MALLOC_ZERO(charp, nbytes)                                            \
    353 do {                                                                          \
    354   INTERNAL_SIZE_T mzsz = (nbytes);                                            \
    355   if(mzsz <= 9*sizeof(mzsz)) {                                                \
    356     INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    357     if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
    358                                      *mz++ = 0;                               \
    359       if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
    360                                      *mz++ = 0;                               \
    361         if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
    362                                      *mz++ = 0; }}}                           \
    363                                      *mz++ = 0;                               \
    364                                      *mz++ = 0;                               \
    365                                      *mz   = 0;                               \
    366   } else memset((charp), 0, mzsz);                                            \
    367 } while(0)
    368 
    369 #define MALLOC_COPY(dest,src,nbytes)                                          \
    370 do {                                                                          \
    371   INTERNAL_SIZE_T mcsz = (nbytes);                                            \
    372   if(mcsz <= 9*sizeof(mcsz)) {                                                \
    373     INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    374     INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    375     if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
    376                                      *mcdst++ = *mcsrc++;                     \
    377       if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
    378                                      *mcdst++ = *mcsrc++;                     \
    379         if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
    380                                      *mcdst++ = *mcsrc++; }}}                 \
    381                                      *mcdst++ = *mcsrc++;                     \
    382                                      *mcdst++ = *mcsrc++;                     \
    383                                      *mcdst   = *mcsrc  ;                     \
    384   } else memcpy(dest, src, mcsz);                                             \
    385 } while(0)
    386 
    387 #else /* !USE_MEMCPY */
    388 
    389 /* Use Duff's device for good zeroing/copying performance. */
    390 
    391 #define MALLOC_ZERO(charp, nbytes)                                            \
    392 do {                                                                          \
    393   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
    394   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
    395   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
    396   switch (mctmp) {                                                            \
    397     case 0: for(;;) { *mzp++ = 0;                                             \
    398     case 7:           *mzp++ = 0;                                             \
    399     case 6:           *mzp++ = 0;                                             \
    400     case 5:           *mzp++ = 0;                                             \
    401     case 4:           *mzp++ = 0;                                             \
    402     case 3:           *mzp++ = 0;                                             \
    403     case 2:           *mzp++ = 0;                                             \
    404     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
    405   }                                                                           \
    406 } while(0)
    407 
    408 #define MALLOC_COPY(dest,src,nbytes)                                          \
    409 do {                                                                          \
    410   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
    411   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
    412   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
    413   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
    414   switch (mctmp) {                                                            \
    415     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    416     case 7:           *mcdst++ = *mcsrc++;                                    \
    417     case 6:           *mcdst++ = *mcsrc++;                                    \
    418     case 5:           *mcdst++ = *mcsrc++;                                    \
    419     case 4:           *mcdst++ = *mcsrc++;                                    \
    420     case 3:           *mcdst++ = *mcsrc++;                                    \
    421     case 2:           *mcdst++ = *mcsrc++;                                    \
    422     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
    423   }                                                                           \
    424 } while(0)
    425 
    426 #endif
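        /*
          Illustrative sketch (not part of the original source; never compiled):
          how MALLOC_ZERO and MALLOC_COPY are meant to be invoked.  The byte
          counts are always odd multiples (>= 3) of sizeof(INTERNAL_SIZE_T),
          as is naturally the case for chunk payloads in this malloc.
        */
        #if 0
        static void example_zero_and_copy(char* dst, char* src)
        {
          /* 5 units: small enough to take the unrolled inline path */
          MALLOC_ZERO(dst, 5 * sizeof(INTERNAL_SIZE_T));

          /* 33 units: large enough to fall through to memcpy
             (or to the Duff's device loop when USE_MEMCPY is 0) */
          MALLOC_COPY(dst, src, 33 * sizeof(INTERNAL_SIZE_T));
        }
        #endif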
    427 
    428 /*
    429   Define HAVE_MMAP to optionally make malloc() use mmap() to
    430   allocate very large blocks.  These will be returned to the
    431   operating system immediately after a free().
    432 */
    433 
    434 #ifndef HAVE_MMAP
    435 #define HAVE_MMAP 0
    436 #endif
    437 
    438 /*
    439   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
    440   large blocks.  This is currently only possible on Linux with
    441   kernel versions newer than 1.3.77.
    442 */
    443 
    444 #ifndef HAVE_MREMAP
    445 #ifdef INTERNAL_LINUX_C_LIB
    446 #define HAVE_MREMAP 1
    447 #else
    448 #define HAVE_MREMAP 0
    449 #endif
    450 #endif
    451 
    452 #if HAVE_MMAP
    453 
    454 #include <unistd.h>
    455 #include <fcntl.h>
    456 #include <sys/mman.h>
    457 
    458 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
    459 #define MAP_ANONYMOUS MAP_ANON
    460 #endif
    461 
    462 #endif /* HAVE_MMAP */
    463 
    464 /*
    465   Access to system page size. To the extent possible, this malloc
    466   manages memory from the system in page-size units.
    467   
    468   The following mechanics for getpagesize were adapted from 
    469   bsd/gnu getpagesize.h 
    470 */
    471 
    472 #ifndef LACKS_UNISTD_H
    473 #  include <unistd.h>
    474 #endif
    475 
    476 #if 0
    477 #ifndef malloc_getpagesize
    478 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
    479 #    ifndef _SC_PAGE_SIZE
    480 #      define _SC_PAGE_SIZE _SC_PAGESIZE
    481 #    endif
    482 #  endif
    483 #  ifdef _SC_PAGE_SIZE
    484 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
    485 #  else
    486 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
    487        extern size_t getpagesize();
    488 #      define malloc_getpagesize getpagesize()
    489 #    else
    490 #      include <sys/param.h>
    491 #      ifdef EXEC_PAGESIZE
    492 #        define malloc_getpagesize EXEC_PAGESIZE
    493 #      else
    494 #        ifdef NBPG
    495 #          ifndef CLSIZE
    496 #            define malloc_getpagesize NBPG
    497 #          else
    498 #            define malloc_getpagesize (NBPG * CLSIZE)
    499 #          endif
    500 #        else 
    501 #          ifdef NBPC
    502 #            define malloc_getpagesize NBPC
    503 #          else
    504 #            ifdef PAGESIZE
    505 #              define malloc_getpagesize PAGESIZE
    506 #            else
    507 #              define malloc_getpagesize (4096) /* just guess */
    508 #            endif
    509 #          endif
    510 #        endif 
    511 #      endif
    512 #    endif 
    513 #  endif
    514 #endif
    515 #endif
    516 
    517 #define malloc_getpagesize (0x1000)
    518 
    519 /*
    520 
    521   This version of malloc supports the standard SVID/XPG mallinfo
    522   routine that returns a struct containing the same kind of
    523   information you can get from malloc_stats. It should work on
    524   any SVID/XPG compliant system that has a /usr/include/malloc.h
    525   defining struct mallinfo. (If you'd like to install such a thing
    526   yourself, cut out the preliminary declarations as described above
    527   and below and save them in a malloc.h file. But there's no
    528   compelling reason to bother to do this.)
    529 
    530   The main declaration needed is the mallinfo struct that is returned
    531     (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
    532   bunch of fields, most of which are not even meaningful in this
    533     version of malloc. Some of these fields are instead filled by
    534   mallinfo() with other numbers that might possibly be of interest.
    535 
    536   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
    537   /usr/include/malloc.h file that includes a declaration of struct
    538   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
    539   version is declared below.  These must be precisely the same for
    540   mallinfo() to work.
    541 
    542 */
    543 
    544 /* #define HAVE_USR_INCLUDE_MALLOC_H */
    545 
    546 #if HAVE_USR_INCLUDE_MALLOC_H
    547 #include "/usr/include/malloc.h"
    548 #else
    549 
    550 /* SVID2/XPG mallinfo structure */
    551 
    552 struct mallinfo {
    553   int arena;    /* total space allocated from system */
    554   int ordblks;  /* number of non-inuse chunks */
    555   int smblks;   /* unused -- always zero */
    556   int hblks;    /* number of mmapped regions */
    557   int hblkhd;   /* total space in mmapped regions */
    558   int usmblks;  /* unused -- always zero */
    559   int fsmblks;  /* unused -- always zero */
    560   int uordblks; /* total allocated space */
    561   int fordblks; /* total non-inuse space */
    562   int keepcost; /* top-most, releasable (via malloc_trim) space */
    563 };	
    564 
    565 /* SVID2/XPG mallopt options */
    566 
    567 #define M_MXFAST  1    /* UNUSED in this malloc */
    568 #define M_NLBLKS  2    /* UNUSED in this malloc */
    569 #define M_GRAIN   3    /* UNUSED in this malloc */
    570 #define M_KEEP    4    /* UNUSED in this malloc */
    571 
    572 #endif
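        /*
          Illustrative sketch (not part of the original source; never compiled):
          reading the mallinfo fields that this malloc actually fills in.
          mallinfo() itself is declared with the other public routines below.
        */
        #if 0
        static void example_print_mallinfo(void)
        {
          struct mallinfo mi = mallinfo();
          fprintf(stderr, "arena (sbrked) bytes: %d\n", mi.arena);
          fprintf(stderr, "allocated bytes:      %d\n", mi.uordblks);
          fprintf(stderr, "free bytes:           %d\n", mi.fordblks);
          fprintf(stderr, "trimmable top bytes:  %d\n", mi.keepcost);
        }
        #endif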
    573 
    574 /* mallopt options that actually do something */
    575 
    576 #define M_TRIM_THRESHOLD    -1
    577 #define M_TOP_PAD           -2
    578 #define M_MMAP_THRESHOLD    -3
    579 #define M_MMAP_MAX          -4
    580 
    581 #ifndef DEFAULT_TRIM_THRESHOLD
    582 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
    583 #endif
    584 
    585 /*
    586     M_TRIM_THRESHOLD is the maximum amount of unused top-most memory 
    587       to keep before releasing via malloc_trim in free().
    588 
    589       Automatic trimming is mainly useful in long-lived programs.
    590       Because trimming via sbrk can be slow on some systems, and can
    591       sometimes be wasteful (in cases where programs immediately
    592       afterward allocate more large chunks) the value should be high
    593       enough so that your overall system performance would improve by
    594       releasing.  
    595 
    596       The trim threshold and the mmap control parameters (see below)
    597       can be traded off with one another. Trimming and mmapping are
    598       two different ways of releasing unused memory back to the
    599       system. Between these two, it is often possible to keep
    600       system-level demands of a long-lived program down to a bare
    601       minimum. For example, in one test suite of sessions measuring
    602       the XF86 X server on Linux, using a trim threshold of 128K and a
    603       mmap threshold of 192K led to near-minimal long term resource
    604       consumption.  
    605 
    606       If you are using this malloc in a long-lived program, it should
    607       pay to experiment with these values.  As a rough guide, you
    608       might set to a value close to the average size of a process
    609       (program) running on your system.  Releasing this much memory
    610       would allow such a process to run in memory.  Generally, it's
    611       worth it to tune for trimming rather than memory mapping when a
    612       program undergoes phases where several large chunks are
    613       allocated and released in ways that can reuse each other's
    614       storage, perhaps mixed with phases where there are no such
    615       chunks at all.  And in well-behaved long-lived programs,
    616       controlling release of large blocks via trimming versus mapping
    617       is usually faster.
    618 
    619       However, in most programs, these parameters serve mainly as
    620       protection against the system-level effects of carrying around
    621       massive amounts of unneeded memory. Since frequent calls to
    622       sbrk, mmap, and munmap otherwise degrade performance, the default
    623       parameters are set to relatively high values that serve only as
    624       safeguards.
    625 
    626       The default trim value is high enough to cause trimming only in
    627       fairly extreme (by current memory consumption standards) cases.
    628       It must be greater than page size to have any useful effect.  To
    629       disable trimming completely, you can set it to (unsigned long)(-1).
    630 */
    631 
    632 #ifndef DEFAULT_TOP_PAD
    633 #define DEFAULT_TOP_PAD        (0)
    634 #endif
    635 
    636 /*
    637     M_TOP_PAD is the amount of extra `padding' space to allocate or 
    638       retain whenever sbrk is called. It is used in two ways internally:
    639 
    640       * When sbrk is called to extend the top of the arena to satisfy
    641         a new malloc request, this much padding is added to the sbrk
    642         request.
    643 
    644       * When malloc_trim is called automatically from free(),
    645         it is used as the `pad' argument.
    646 
    647       In both cases, the actual amount of padding is rounded 
    648       so that the end of the arena is always a system page boundary.
    649 
    650       The main reason for using padding is to avoid calling sbrk so
    651       often. Having even a small pad greatly reduces the likelihood
    652       that nearly every malloc request during program start-up (or
    653       after trimming) will invoke sbrk, which needlessly wastes
    654       time. 
    655 
    656       Automatic rounding-up to page-size units is normally sufficient
    657       to avoid measurable overhead, so the default is 0.  However, in
    658       systems where sbrk is relatively slow, it can pay to increase
    659       this value, at the expense of carrying around more memory than 
    660       the program needs.
    661 
    662 */
    663 
    664 #ifndef DEFAULT_MMAP_THRESHOLD
    665 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
    666 #endif
    667 
    668 /*
    669     M_MMAP_THRESHOLD is the request size threshold for using mmap() 
    670       to service a request. Requests of at least this size that cannot 
    671       be allocated using already-existing space will be serviced via mmap.  
    672       (If enough normal freed space already exists it is used instead.)
    673 
    674       Using mmap segregates relatively large chunks of memory so that
    675       they can be individually obtained and released from the host
    676       system. A request serviced through mmap is never reused by any
    677       other request (at least not directly; the system may just so
    678       happen to remap successive requests to the same locations).
    679 
    680       Segregating space in this way has the benefit that mmapped space
    681       can ALWAYS be individually released back to the system, which
    682       helps keep the system level memory demands of a long-lived
    683       program low. Mapped memory can never become `locked' between
    684       other chunks, as can happen with normally allocated chunks, which
    685       means that even trimming via malloc_trim would not release them.
    686 
    687       However, it has the disadvantages that:
    688 
    689          1. The space cannot be reclaimed, consolidated, and then
    690             used to service later requests, as happens with normal chunks. 
    691          2. It can lead to more wastage because of mmap page alignment
    692             requirements
    693          3. It causes malloc performance to be more dependent on host
    694             system memory management support routines which may vary in
    695             implementation quality and may impose arbitrary
    696             limitations. Generally, servicing a request via normal
    697             malloc steps is faster than going through a system's mmap.
    698 
    699       All together, these considerations should lead you to use mmap
    700       only for relatively large requests.  
    701 */
    702 
    703 #ifndef DEFAULT_MMAP_MAX
    704 #if HAVE_MMAP
    705 #define DEFAULT_MMAP_MAX       (64)
    706 #else
    707 #define DEFAULT_MMAP_MAX       (0)
    708 #endif
    709 #endif
    710 
    711 /*
    712     M_MMAP_MAX is the maximum number of requests to simultaneously 
    713       service using mmap. This parameter exists because:
    714 
    715          1. Some systems have a limited number of internal tables for
    716             use by mmap.
    717          2. In most systems, overreliance on mmap can degrade overall
    718             performance.
    719          3. If a program allocates many large regions, it is probably
    720             better off using normal sbrk-based allocation routines that
    721             can reclaim and reallocate normal heap memory. Using a
    722             small value allows transition into this mode after the
    723             first few allocations.
    724 
    725       Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
    726       the default value is 0, and attempts to set it to non-zero values
    727       in mallopt will fail.
    728 */
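        /*
          Illustrative sketch (not part of the original source; never compiled):
          tuning the four parameters above via mallopt().  The values shown are
          arbitrary examples, not recommendations.
        */
        #if 0
        static void example_tune_parameters(void)
        {
          mallopt(M_TRIM_THRESHOLD, 256 * 1024);  /* trim when top exceeds 256K   */
          mallopt(M_TOP_PAD,        16 * 1024);   /* pad each sbrk request by 16K */
          mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* mmap requests of 256K and up */
          mallopt(M_MMAP_MAX,       32);          /* at most 32 mmapped regions   */
        }
        #endif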
    729 
    730 /* 
    731 
    732   Special defines for linux libc
    733 
    734   Except when compiled using these special defines for Linux libc
    735   using weak aliases, this malloc is NOT designed to work in
    736   multithreaded applications.  No semaphores or other concurrency
    737   control are provided to ensure that multiple malloc or free calls
    738       don't run at the same time, which could be disastrous. A single
    739   semaphore could be used across malloc, realloc, and free (which is
    740   essentially the effect of the linux weak alias approach). It would
    741   be hard to obtain finer granularity.
    742 */
    743 
    744 #ifdef INTERNAL_LINUX_C_LIB
    745 
    746 #if __STD_C
    747 
    748 Void_t * __default_morecore_init (ptrdiff_t);
    749 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
    750 
    751 #else
    752 
    753 Void_t * __default_morecore_init ();
    754 Void_t *(*__morecore)() = __default_morecore_init;
    755 
    756 #endif
    757 
    758 #define MORECORE (*__morecore)
    759 #define MORECORE_FAILURE 0
    760 #define MORECORE_CLEARS 1 
    761 
    762 #else /* INTERNAL_LINUX_C_LIB */
    763 
    764 #if __STD_C
    765 extern Void_t*     sbrk(ptrdiff_t);
    766 #else
    767 extern Void_t*     sbrk();
    768 #endif
    769 
    770 #ifndef MORECORE
    771 #define MORECORE sbrk
    772 #endif
    773 
    774 #ifndef MORECORE_FAILURE
    775 #define MORECORE_FAILURE -1
    776 #endif
    777 
    778 #ifndef MORECORE_CLEARS
    779 #define MORECORE_CLEARS 1
    780 #endif
    781 
    782 #endif /* INTERNAL_LINUX_C_LIB */
    783 
    784 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
    785 
    786 #define cALLOc		__libc_calloc
    787 #define fREe		__libc_free
    788 #define mALLOc		__libc_malloc
    789 #define mEMALIGn	__libc_memalign
    790 #define rEALLOc		__libc_realloc
    791 #define vALLOc		__libc_valloc
    792 #define pvALLOc		__libc_pvalloc
    793 #define mALLINFo	__libc_mallinfo
    794 #define mALLOPt		__libc_mallopt
    795 
    796 #pragma weak calloc = __libc_calloc
    797 #pragma weak free = __libc_free
    798 #pragma weak cfree = __libc_free
    799 #pragma weak malloc = __libc_malloc
    800 #pragma weak memalign = __libc_memalign
    801 #pragma weak realloc = __libc_realloc
    802 #pragma weak valloc = __libc_valloc
    803 #pragma weak pvalloc = __libc_pvalloc
    804 #pragma weak mallinfo = __libc_mallinfo
    805 #pragma weak mallopt = __libc_mallopt
    806 
    807 #else
    808 
    809 
    810 #define cALLOc		__calloc
    811 #define fREe		  __free
    812 #define mALLOc		__malloc
    813 #define mEMALIGn	__memalign
    814 #define rEALLOc		__realloc
    815 #define vALLOc		__valloc
    816 #define pvALLOc		__pvalloc
    817 #define mALLINFo	__mallinfo
    818 #define mALLOPt		__mallopt
    819 
    820 #endif
    821 
    822 /* Public routines */
    823 
    824 #if __STD_C
    825 
    826 Void_t* mALLOc(size_t);
    827 void    fREe(Void_t*);
    828 Void_t* rEALLOc(Void_t*, size_t);
    829 Void_t* mEMALIGn(size_t, size_t);
    830 Void_t* vALLOc(size_t);
    831 Void_t* pvALLOc(size_t);
    832 Void_t* cALLOc(size_t, size_t);
    833 void    cfree(Void_t*);
    834 int     malloc_trim(size_t);
    835 size_t  malloc_usable_size(Void_t*);
    836 void    malloc_stats();
    837 int     mALLOPt(int, int);
    838 struct mallinfo mALLINFo(void);
    839 #else
    840 Void_t* mALLOc();
    841 void    fREe();
    842 Void_t* rEALLOc();
    843 Void_t* mEMALIGn();
    844 Void_t* vALLOc();
    845 Void_t* pvALLOc();
    846 Void_t* cALLOc();
    847 void    cfree();
    848 int     malloc_trim();
    849 size_t  malloc_usable_size();
    850 void    malloc_stats();
    851 int     mALLOPt();
    852 struct mallinfo mALLINFo();
    853 #endif
    854 
    855 #ifdef __cplusplus
    856 };  /* end of extern "C" */
    857 #endif
    858 
    859 /* ---------- To make a malloc.h, end cutting here ------------ */
    860 
    861 /*
    862   Type declarations
    863 */
    864 
    865 struct malloc_chunk
    866 {
    867   INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
    868   INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
    869   struct malloc_chunk* fd;   /* double links -- used only if free. */
    870   struct malloc_chunk* bk;
    871 };
    872 
    873 typedef struct malloc_chunk* mchunkptr;
    874 
    875 /*
    876    malloc_chunk details:
    877 
    878     (The following includes lightly edited explanations by Colin Plumb.)
    879 
    880     Chunks of memory are maintained using a `boundary tag' method as
    881     described in e.g., Knuth or Standish.  (See the paper by Paul
    882     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    883     survey of such techniques.)  Sizes of free chunks are stored both
    884     in the front of each chunk and at the end.  This makes
    885     consolidating fragmented chunks into bigger chunks very fast.  The
    886     size fields also hold bits representing whether chunks are free or
    887     in use.
    888 
    889     An allocated chunk looks like this:  
    890 
    891 
    892     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    893             |             Size of previous chunk, if allocated            | |
    894             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    895             |             Size of chunk, in bytes                         |P|
    896       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    897             |             User data starts here...                          .
    898             .                                                               .
    899             .             (malloc_usable_space() bytes)                     .
    900             .                                                               |
    901 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    902             |             Size of chunk                                     |
    903             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    904 
    905 
    906     Where "chunk" is the front of the chunk for the purpose of most of
    907     the malloc code, but "mem" is the pointer that is returned to the
    908     user.  "Nextchunk" is the beginning of the next contiguous chunk.
    909 
    910     Chunks always begin on even word boundaries, so the mem portion
    911     (which is returned to the user) is also on an even word boundary, and
    912     thus double-word aligned.
    913 
    914     Free chunks are stored in circular doubly-linked lists, and look like this:
    915 
    916     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    917             |             Size of previous chunk                            |
    918             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    919     `head:' |             Size of chunk, in bytes                         |P|
    920       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    921             |             Forward pointer to next chunk in list             |
    922             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    923             |             Back pointer to previous chunk in list            |
    924             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    925             |             Unused space (may be 0 bytes long)                .
    926             .                                                               .
    927             .                                                               |
    928 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    929     `foot:' |             Size of chunk, in bytes                           |
    930             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    931 
    932     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    933     chunk size (which is always a multiple of two words), is an in-use
    934     bit for the *previous* chunk.  If that bit is *clear*, then the
    935     word before the current chunk size contains the previous chunk
    936     size, and can be used to find the front of the previous chunk.
    937     (The very first chunk allocated always has this bit set,
    938     preventing access to non-existent (or non-owned) memory.)
    939 
    940     Note that the `foot' of the current chunk is actually represented
    941     as the prev_size of the NEXT chunk. (This makes it easier to
    942     deal with alignments etc).
    943 
    944     The two exceptions to all this are 
    945 
    946      1. The special chunk `top', which doesn't bother using the 
    947         trailing size field since there is no
    948         next contiguous chunk that would have to index off it. (After
    949         initialization, `top' is forced to always exist.  If it would
    950         become less than MINSIZE bytes long, it is replenished via
    951         malloc_extend_top.)
    952 
    953      2. Chunks allocated via mmap, which have the second-lowest-order
    954         bit (IS_MMAPPED) set in their size fields.  Because they are
    955         never merged or traversed from any other chunk, they have no
    956         foot size or inuse information.
    957 
    958     Available chunks are kept in any of several places (all declared below):
    959 
    960     * `av': An array of chunks serving as bin headers for consolidated
    961        chunks. Each bin is doubly linked.  The bins are approximately
    962        proportionally (log) spaced.  There are a lot of these bins
    963        (128). This may look excessive, but works very well in
    964        practice.  All procedures maintain the invariant that no
    965        consolidated chunk physically borders another one. Chunks in
    966        bins are kept in size order, with ties going to the
    967        approximately least recently used chunk.
    968 
    969        The chunks in each bin are maintained in decreasing sorted order by
    970        size.  This is irrelevant for the small bins, which all contain
    971        the same-sized chunks, but facilitates best-fit allocation for
    972        larger chunks. (These lists are just sequential. Keeping them in
    973        order almost never requires enough traversal to warrant using
    974        fancier ordered data structures.)  Chunks of the same size are
    975        linked with the most recently freed at the front, and allocations
    976        are taken from the back.  This results in LRU or FIFO allocation
    977        order, which tends to give each chunk an equal opportunity to be
    978        consolidated with adjacent freed chunks, resulting in larger free
    979        chunks and less fragmentation. 
    980 
    981     * `top': The top-most available chunk (i.e., the one bordering the
    982        end of available memory) is treated specially. It is never
    983        included in any bin, is used only if no other chunk is
    984        available, and is released back to the system if it is very
    985        large (see M_TRIM_THRESHOLD).
    986 
    987     * `last_remainder': A bin holding only the remainder of the
    988        most recently split (non-top) chunk. This bin is checked
    989        before other non-fitting chunks, so as to provide better
    990        locality for runs of sequentially allocated chunks. 
    991 
    992     *  Implicitly, through the host system's memory mapping tables.
    993        If supported, requests greater than a threshold are usually 
    994        serviced via calls to mmap, and then later released via munmap.
    995 
    996 */
    997 
    998 /*  sizes, alignments */
    999 
   1000 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
   1001 #define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
   1002 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
   1003 #define MINSIZE                (sizeof(struct malloc_chunk))
   1004 
   1005 /* conversion from malloc headers to user pointers, and back */
   1006 
   1007 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
   1008 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
   1009 
   1010 /* pad request bytes into a usable size */
   1011 
   1012 #define request2size(req) \
   1013  (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
   1014   (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   1015    (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
   1016 
   1017 /* Check if m has acceptable alignment */
   1018 
   1019 #define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
   1020 
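        /*
          Worked example of request2size, assuming a 4-byte INTERNAL_SIZE_T
          (so SIZE_SZ = 4, MALLOC_ALIGNMENT = 8, MINSIZE = 16):

             request2size(1)  = 16                     below MINSIZE, so MINSIZE
             request2size(13) = (13 + 4 + 7) & ~7 = 24
             request2size(24) = (24 + 4 + 7) & ~7 = 32

          The result is always a multiple of 8 and at least MINSIZE, so the
          mem pointer handed back to the user satisfies aligned_OK.
        */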
   1021 /* 
   1022   Physical chunk operations  
   1023 */
   1024 
   1025 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
   1026 
   1027 #define PREV_INUSE 0x1 
   1028 
   1029 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
   1030 
   1031 #define IS_MMAPPED 0x2
   1032 
   1033 /* Bits to mask off when extracting size */
   1034 
   1035 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
   1036 
   1037 /* Ptr to next physical malloc_chunk. */
   1038 
   1039 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
   1040 
   1041 /* Ptr to previous physical malloc_chunk */
   1042 
   1043 #define prev_chunk(p)\
   1044    ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
   1045 
   1046 /* Treat space at ptr + offset as a chunk */
   1047 
   1048 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
   1049 
   1050 /* 
   1051   Dealing with use bits 
   1052 */
   1053 
   1054 /* extract p's inuse bit */
   1055 
   1056 #define inuse(p)\
   1057 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
   1058 
   1059 /* extract inuse bit of previous chunk */
   1060 
   1061 #define prev_inuse(p)  ((p)->size & PREV_INUSE)
   1062 
   1063 /* check for mmap()'ed chunk */
   1064 
   1065 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
   1066 
   1067 /* set/clear chunk as in use without otherwise disturbing */
   1068 
   1069 #define set_inuse(p)\
   1070 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
   1071 
   1072 #define clear_inuse(p)\
   1073 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
   1074 
   1075 /* check/set/clear inuse bits in known places */
   1076 
   1077 #define inuse_bit_at_offset(p, s)\
   1078  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
   1079 
   1080 #define set_inuse_bit_at_offset(p, s)\
   1081  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
   1082 
   1083 #define clear_inuse_bit_at_offset(p, s)\
   1084  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
   1085 
   1086 /* 
   1087   Dealing with size fields 
   1088 */
   1089 
   1090 /* Get size, ignoring use bits */
   1091 
   1092 #define chunksize(p)          ((p)->size & ~(SIZE_BITS))
   1093 
   1094 /* Set size at head, without disturbing its use bit */
   1095 
   1096 #define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))
   1097 
   1098 /* Set size/use ignoring previous bits in header */
   1099 
   1100 #define set_head(p, s)        ((p)->size = (s))
   1101 
   1102 /* Set size at footer (only when chunk is not in use) */
   1103 
   1104 #define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
   1105 
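        /*
          Illustrative sketch (not part of the original source; never compiled):
          how the macros above combine to walk physically adjacent chunks.
        */
        #if 0
        static void example_walk_neighbors(mchunkptr p)
        {
          mchunkptr nxt = next_chunk(p);              /* next physical chunk     */

          if (!prev_inuse(p))                         /* prev_size valid only    */
          {                                           /* when prev chunk is free */
            mchunkptr prv = prev_chunk(p);
            assert(next_chunk(prv) == p);             /* boundary tags agree     */
          }

          if (!inuse(p))                              /* free chunk: its size is */
            assert(nxt->prev_size == chunksize(p));   /* echoed in next's foot   */
        }
        #endif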
   1106 /*
   1107    Bins
   1108 
   1109     The bins, `av_' are an array of pairs of pointers serving as the
   1110     heads of (initially empty) doubly-linked lists of chunks, laid out
   1111     in a way so that each pair can be treated as if it were in a
   1112     malloc_chunk. (This way, the fd/bk offsets for linking bin heads
   1113     and chunks are the same).
   1114 
   1115     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
   1116     8 bytes apart. Larger bins are approximately logarithmically
   1117     spaced. (See the table below.) The `av_' array is never mentioned
   1118     directly in the code, but instead via bin access macros.
   1119 
   1120     Bin layout:
   1121 
   1122     64 bins of size       8
   1123     32 bins of size      64
   1124     16 bins of size     512
   1125      8 bins of size    4096
   1126      4 bins of size   32768
   1127      2 bins of size  262144
   1128      1 bin  of size what's left
   1129 
   1130     There is actually a little bit of slop in the numbers in bin_index
   1131     for the sake of speed. This makes no difference elsewhere.
   1132 
   1133     The special chunks `top' and `last_remainder' get their own bins,
   1134     (this is implemented via yet more trickery with the av_ array),
   1135     although `top' is never properly linked to its bin since it is
   1136     always handled specially.
   1137 
   1138 */
   1139 
   1140 #define NAV             128   /* number of bins */
   1141 
   1142 typedef struct malloc_chunk* mbinptr;
   1143 
   1144 /* access macros */
   1145 
   1146 #define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
   1147 #define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
   1148 #define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
   1149 
   1150 /*
   1151    The first 2 bins are never indexed. The corresponding av_ cells are instead
   1152    used for bookkeeping. This is not to save space, but to simplify
   1153    indexing, maintain locality, and avoid some initialization tests.
   1154 */
   1155 
   1156 #define top            (bin_at(0)->fd)   /* The topmost chunk */
   1157 #define last_remainder (bin_at(1))       /* remainder from last split */
   1158 
   1159 /*
   1160    Because top initially points to its own bin with initial
   1161    zero size, thus forcing extension on the first malloc request, 
   1162    we avoid having any special code in malloc to check whether 
   1163    it even exists yet. But we still need to check in malloc_extend_top.
   1164 */
   1165 
   1166 #define initial_top    ((mchunkptr)(bin_at(0)))
   1167 
   1168 /* Helper macro to initialize bins */
   1169 
   1170 #define IAV(i)  bin_at(i), bin_at(i)
   1171 
   1172 static mbinptr av_[NAV * 2 + 2] = {
   1173  0, 0,
   1174  IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
   1175  IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
   1176  IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
   1177  IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
   1178  IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
   1179  IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
   1180  IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
   1181  IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
   1182  IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
   1183  IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
   1184  IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
   1185  IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
   1186  IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
   1187  IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
   1188  IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
   1189  IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
   1190 };
   1191 
   1192 /* field-extraction macros */
   1193 
   1194 #define first(b) ((b)->fd)
   1195 #define last(b)  ((b)->bk)
   1196 
   1197 /* 
   1198   Indexing into bins
   1199 */
   1200 
   1201 #define bin_index(sz)                                                          \
   1202 (((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
   1203  ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
   1204  ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
   1205  ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
   1206  ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
   1207  ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
   1208                                           126)                     
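        /*
          A few worked values of bin_index (sizes are chunk sizes in bytes):

             bin_index(16)   = 2               small bins: size >> 3
             bin_index(504)  = 63              the last small bin
             bin_index(512)  = 56 + 8  = 64    first of the 64-byte-spaced bins
             bin_index(4096) = 91 + 8  = 99    one of the 512-byte-spaced bins
        */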
   1209 /* 
   1210   bins for chunks < 512 are all spaced 8 bytes apart, and hold
   1211   identically sized chunks. This is exploited in malloc.
   1212 */
   1213 
   1214 #define MAX_SMALLBIN         63
   1215 #define MAX_SMALLBIN_SIZE   512
   1216 #define SMALLBIN_WIDTH        8
   1217 
   1218 #define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
   1219 
   1220 /* 
   1221    Requests are `small' if both the corresponding and the next bin are small
   1222 */
   1223 
   1224 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
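
/*
  Illustrative sketch, guarded out so it is never compiled: a few concrete
  sizes run through the indexing macros above.  The helper name below is
  hypothetical; the expected values follow directly from the shift-and-add
  arithmetic in bin_index() and smallbin_index().
*/
#if 0
static void bin_index_examples(void)   /* hypothetical, illustration only */
{
  int i;

  i = smallbin_index(40);        /* 40 >> 3           -> bin   5 (small bins
                                    are spaced 8 bytes apart)              */
  i = bin_index(600);            /* 56 + (600 >> 6)   -> bin  65           */
  i = bin_index(4096);           /* 91 + (4096 >> 9)  -> bin  99           */
  i = bin_index(1024UL * 1024);  /* size >> 9 > 1364  -> bin 126 (largest) */
  (void)i;
}
#endif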
   1225 
   1226 /*
   1227     To help compensate for the large number of bins, a one-level index
   1228     structure is used for bin-by-bin searching.  `binblocks' is a
   1229     one-word bitvector recording whether groups of BINBLOCKWIDTH bins
   1230     have any (possibly) non-empty bins, so they can be skipped over
    1231     all at once during traversals. The bits are NOT always
   1232     cleared as soon as all bins in a block are empty, but instead only
   1233     when all are noticed to be empty during traversal in malloc.
   1234 */
   1235 
   1236 #define BINBLOCKWIDTH     4   /* bins per block */
   1237 
   1238 #define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */
   1239 
   1240 /* bin<->block macros */
   1241 
   1242 #define idx2binblock(ix)    ((unsigned)1 << (ix / BINBLOCKWIDTH))
   1243 #define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
   1244 #define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
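
/*
  Illustrative sketch, guarded out so it is never compiled: how the
  binblocks bitvector above maps bin indices to block bits.  The helper
  name below is hypothetical.
*/
#if 0
static void binblock_example(void)   /* hypothetical, illustration only */
{
  /* Bin 5 belongs to block 5 / BINBLOCKWIDTH == 1, tracked by bit 1. */
  unsigned long bit = idx2binblock(5);   /* (unsigned)1 << 1 == 0x2 */

  mark_binblock(5);                      /* binblocks |=  0x2 */
  clear_binblock(5);                     /* binblocks &= ~0x2 */
  (void)bit;
}
#endif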
   1245 
   1246 /*  Other static bookkeeping data */
   1247 
   1248 /* variables holding tunable values */
   1249 
   1250 static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
   1251 static unsigned long top_pad          = DEFAULT_TOP_PAD;
   1252 static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
   1253 static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
   1254 
   1255 /* The first value returned from sbrk */
   1256 static char* sbrk_base = (char*)(-1);
   1257 
   1258 /* The maximum memory obtained from system via sbrk */
   1259 static unsigned long max_sbrked_mem = 0; 
   1260 
   1261 /* The maximum via either sbrk or mmap */
   1262 static unsigned long max_total_mem = 0; 
   1263 
   1264 /* internal working copy of mallinfo */
   1265 static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
   1266 
   1267 /* The total memory obtained from system via sbrk */
   1268 #define sbrked_mem  (current_mallinfo.arena)
   1269 
   1270 /* Tracking mmaps */
   1271 
   1272 static unsigned int n_mmaps = 0;
   1273 static unsigned long mmapped_mem = 0;
   1274 
   1275 /* 
   1276   Debugging support 
   1277 */
   1278 
   1279 #if DEBUG
   1280 
   1281 /*
   1282   These routines make a number of assertions about the states
   1283   of data structures that should be true at all times. If any
   1284   are not true, it's very likely that a user program has somehow
   1285   trashed memory. (It's also possible that there is a coding error
   1286   in malloc. In which case, please report it!)
   1287 */
   1288 
   1289 #if __STD_C
   1290 static void do_check_chunk(mchunkptr p) 
   1291 #else
   1292 static void do_check_chunk(p) mchunkptr p;
   1293 #endif
   1294 { 
   1295   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
   1296 
   1297   /* No checkable chunk is mmapped */
   1298   assert(!chunk_is_mmapped(p));
   1299 
   1300   /* Check for legal address ... */
   1301   assert((char*)p >= sbrk_base);
   1302   if (p != top) 
   1303     assert((char*)p + sz <= (char*)top);
   1304   else
   1305     assert((char*)p + sz <= sbrk_base + sbrked_mem);
   1306 }
   1307 
   1308 #if __STD_C
   1309 static void do_check_free_chunk(mchunkptr p) 
   1310 #else
   1311 static void do_check_free_chunk(p) mchunkptr p;
   1312 #endif
   1313 { 
   1314   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
   1315   mchunkptr next = chunk_at_offset(p, sz);
   1316 
   1317   do_check_chunk(p);
   1318 
   1319   /* Check whether it claims to be free ... */
   1320   assert(!inuse(p));
   1321 
   1322   /* Unless a special marker, must have OK fields */
   1323   if ((long)sz >= (long)MINSIZE)
   1324   {
   1325     assert((sz & MALLOC_ALIGN_MASK) == 0);
   1326     assert(aligned_OK(chunk2mem(p)));
   1327     /* ... matching footer field */
   1328     assert(next->prev_size == sz);
   1329     /* ... and is fully consolidated */
   1330     assert(prev_inuse(p));
   1331     assert (next == top || inuse(next));
   1332     
   1333     /* ... and has minimally sane links */
   1334     assert(p->fd->bk == p);
   1335     assert(p->bk->fd == p);
   1336   }
   1337   else /* markers are always of size SIZE_SZ */
   1338     assert(sz == SIZE_SZ); 
   1339 }
   1340 
   1341 #if __STD_C
   1342 static void do_check_inuse_chunk(mchunkptr p) 
   1343 #else
   1344 static void do_check_inuse_chunk(p) mchunkptr p;
   1345 #endif
   1346 { 
   1347   mchunkptr next = next_chunk(p);
   1348   do_check_chunk(p);
   1349 
   1350   /* Check whether it claims to be in use ... */
   1351   assert(inuse(p));
   1352 
   1353   /* ... and is surrounded by OK chunks.
   1354     Since more things can be checked with free chunks than inuse ones,
   1355     if an inuse chunk borders them and debug is on, it's worth doing them.
   1356   */
   1357   if (!prev_inuse(p)) 
   1358   {
   1359     mchunkptr prv = prev_chunk(p);
   1360     assert(next_chunk(prv) == p);
   1361     do_check_free_chunk(prv);
   1362   }
   1363   if (next == top)
   1364   {
   1365     assert(prev_inuse(next));
   1366     assert(chunksize(next) >= MINSIZE);
   1367   }
   1368   else if (!inuse(next))
   1369     do_check_free_chunk(next);
   1370 }
   1371 
   1372 #if __STD_C
   1373 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 
   1374 #else
   1375 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
   1376 #endif
   1377 {
   1378   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
   1379   long room = sz - s;
   1380 
   1381   do_check_inuse_chunk(p);
   1382 
   1383   /* Legal size ... */
   1384   assert((long)sz >= (long)MINSIZE);
   1385   assert((sz & MALLOC_ALIGN_MASK) == 0);
   1386   assert(room >= 0);
   1387   assert(room < (long)MINSIZE);
   1388 
   1389   /* ... and alignment */
   1390   assert(aligned_OK(chunk2mem(p)));
   1391 
   1392   /* ... and was allocated at front of an available chunk */
   1393   assert(prev_inuse(p));
   1394 }
   1395 
   1396 #define check_free_chunk(P)  do_check_free_chunk(P)
   1397 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
   1398 #define check_chunk(P) do_check_chunk(P)
   1399 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
   1400 #else
   1401 #define check_free_chunk(P) 
   1402 #define check_inuse_chunk(P)
   1403 #define check_chunk(P)
   1404 #define check_malloced_chunk(P,N)
   1405 #endif
   1406 
   1407 /* 
   1408   Macro-based internal utilities
   1409 */
   1410 
   1411 /*  
   1412   Linking chunks in bin lists.
   1413   Call these only with variables, not arbitrary expressions, as arguments.
   1414 */
   1415 
   1416 /* 
   1417   Place chunk p of size s in its bin, in size order,
   1418   putting it ahead of others of same size.
   1419 */
   1420 
   1421 #define frontlink(P, S, IDX, BK, FD)                                          \
   1422 {                                                                             \
   1423   if (S < MAX_SMALLBIN_SIZE)                                                  \
   1424   {                                                                           \
   1425     IDX = smallbin_index(S);                                                  \
   1426     mark_binblock(IDX);                                                       \
   1427     BK = bin_at(IDX);                                                         \
   1428     FD = BK->fd;                                                              \
   1429     P->bk = BK;                                                               \
   1430     P->fd = FD;                                                               \
   1431     FD->bk = BK->fd = P;                                                      \
   1432   }                                                                           \
   1433   else                                                                        \
   1434   {                                                                           \
   1435     IDX = bin_index(S);                                                       \
   1436     BK = bin_at(IDX);                                                         \
   1437     FD = BK->fd;                                                              \
   1438     if (FD == BK) mark_binblock(IDX);                                         \
   1439     else                                                                      \
   1440     {                                                                         \
   1441       while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
   1442       BK = FD->bk;                                                            \
   1443     }                                                                         \
   1444     P->bk = BK;                                                               \
   1445     P->fd = FD;                                                               \
   1446     FD->bk = BK->fd = P;                                                      \
   1447   }                                                                           \
   1448 }
   1449 
   1450 /* take a chunk off a list */
   1451 
   1452 #define unlink(P, BK, FD)                                                     \
   1453 {                                                                             \
   1454   BK = P->bk;                                                                 \
   1455   FD = P->fd;                                                                 \
   1456   FD->bk = BK;                                                                \
   1457   BK->fd = FD;                                                                \
    1458 }
   1459 
   1460 /* Place p as the last remainder */
   1461 
   1462 #define link_last_remainder(P)                                                \
   1463 {                                                                             \
   1464   last_remainder->fd = last_remainder->bk =  P;                               \
   1465   P->fd = P->bk = last_remainder;                                             \
   1466 }
   1467 
   1468 /* Clear the last_remainder bin */
   1469 
   1470 #define clear_last_remainder \
   1471   (last_remainder->fd = last_remainder->bk = last_remainder)
   1472 
   1473 /* Routines dealing with mmap(). */
   1474 
   1475 #if HAVE_MMAP
   1476 
   1477 #if __STD_C
   1478 static mchunkptr mmap_chunk(size_t size)
   1479 #else
   1480 static mchunkptr mmap_chunk(size) size_t size;
   1481 #endif
   1482 {
   1483   size_t page_mask = malloc_getpagesize - 1;
   1484   mchunkptr p;
   1485 
   1486 #ifndef MAP_ANONYMOUS
   1487   static int fd = -1;
   1488 #endif
   1489 
   1490   if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
   1491 
   1492   /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   1493    * there is no following chunk whose prev_size field could be used.
   1494    */
   1495   size = (size + SIZE_SZ + page_mask) & ~page_mask;
   1496 
   1497 #ifdef MAP_ANONYMOUS
   1498   p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
   1499 		      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
   1500 #else /* !MAP_ANONYMOUS */
   1501   if (fd < 0) 
   1502   {
   1503     fd = open("/dev/zero", O_RDWR);
   1504     if(fd < 0) return 0;
   1505   }
   1506   p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
   1507 #endif
   1508 
   1509   if(p == (mchunkptr)-1) return 0;
   1510 
   1511   n_mmaps++;
   1512   if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
   1513   
   1514   /* We demand that eight bytes into a page must be 8-byte aligned. */
   1515   assert(aligned_OK(chunk2mem(p)));
   1516 
   1517   /* The offset to the start of the mmapped region is stored
   1518    * in the prev_size field of the chunk; normally it is zero,
   1519    * but that can be changed in memalign().
   1520    */
   1521   p->prev_size = 0;
   1522   set_head(p, size|IS_MMAPPED);
   1523   
   1524   mmapped_mem += size;
   1525   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
   1526     max_mmapped_mem = mmapped_mem;
   1527   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
   1528     max_total_mem = mmapped_mem + sbrked_mem;
   1529   return p;
   1530 }
   1531 
   1532 #if __STD_C
   1533 static void munmap_chunk(mchunkptr p)
   1534 #else
   1535 static void munmap_chunk(p) mchunkptr p;
   1536 #endif
   1537 {
   1538   INTERNAL_SIZE_T size = chunksize(p);
   1539   int ret;
   1540 
   1541   assert (chunk_is_mmapped(p));
   1542   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
   1543   assert((n_mmaps > 0));
   1544   assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
   1545 
   1546   n_mmaps--;
   1547   mmapped_mem -= (size + p->prev_size);
   1548 
   1549   ret = munmap((char *)p - p->prev_size, size + p->prev_size);
   1550 
   1551   /* munmap returns non-zero on failure */
   1552   assert(ret == 0);
   1553 }
   1554 
   1555 #if HAVE_MREMAP
   1556 
   1557 #if __STD_C
   1558 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
   1559 #else
   1560 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
   1561 #endif
   1562 {
   1563   size_t page_mask = malloc_getpagesize - 1;
   1564   INTERNAL_SIZE_T offset = p->prev_size;
   1565   INTERNAL_SIZE_T size = chunksize(p);
   1566   char *cp;
   1567 
   1568   assert (chunk_is_mmapped(p));
   1569   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
   1570   assert((n_mmaps > 0));
   1571   assert(((size + offset) & (malloc_getpagesize-1)) == 0);
   1572 
   1573   /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
   1574   new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
   1575 
   1576   cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
   1577 
   1578   if (cp == (char *)-1) return 0;
   1579 
   1580   p = (mchunkptr)(cp + offset);
   1581 
   1582   assert(aligned_OK(chunk2mem(p)));
   1583 
   1584   assert((p->prev_size == offset));
   1585   set_head(p, (new_size - offset)|IS_MMAPPED);
   1586 
   1587   mmapped_mem -= size + offset;
   1588   mmapped_mem += new_size;
   1589   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
   1590     max_mmapped_mem = mmapped_mem;
   1591   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
   1592     max_total_mem = mmapped_mem + sbrked_mem;
   1593   return p;
   1594 }
   1595 
   1596 #endif /* HAVE_MREMAP */
   1597 
   1598 #endif /* HAVE_MMAP */
   1599 
   1600 /* 
   1601   Extend the top-most chunk by obtaining memory from system.
   1602   Main interface to sbrk (but see also malloc_trim).
   1603 */
   1604 
   1605 #if __STD_C
   1606 static void malloc_extend_top(INTERNAL_SIZE_T nb)
   1607 #else
   1608 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
   1609 #endif
   1610 {
   1611   char*     brk;                  /* return value from sbrk */
   1612   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
   1613   INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
   1614   char*     new_brk;              /* return of 2nd sbrk call */
   1615   INTERNAL_SIZE_T top_size;       /* new size of top chunk */
   1616 
   1617   mchunkptr old_top     = top;  /* Record state of old top */
   1618   INTERNAL_SIZE_T old_top_size = chunksize(old_top);
   1619   char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
   1620 
   1621   /* Pad request with top_pad plus minimal overhead */
   1622   
   1623   INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
   1624   unsigned long pagesz    = malloc_getpagesize;
   1625 
   1626   /* If not the first time through, round to preserve page boundary */
   1627   /* Otherwise, we need to correct to a page size below anyway. */
   1628   /* (We also correct below if an intervening foreign sbrk call.) */
   1629 
   1630   if (sbrk_base != (char*)(-1))
   1631     sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
   1632 
   1633   brk = (char*)(MORECORE (sbrk_size));
   1634 
   1635   /* Fail if sbrk failed or if a foreign sbrk call killed our space */
   1636   if (brk == (char*)(MORECORE_FAILURE) || 
   1637       (brk < old_end && old_top != initial_top))
   1638     return;     
   1639 
   1640   sbrked_mem += sbrk_size;
   1641 
   1642   if (brk == old_end) /* can just add bytes to current top */
   1643   {
   1644     top_size = sbrk_size + old_top_size;
   1645     set_head(top, top_size | PREV_INUSE);
   1646   }
   1647   else
   1648   {
   1649     if (sbrk_base == (char*)(-1))  /* First time through. Record base */
   1650       sbrk_base = brk;
   1651     else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
   1652       sbrked_mem += brk - (char*)old_end;
   1653 
   1654     /* Guarantee alignment of first new chunk made from this space */
   1655     front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
   1656     if (front_misalign > 0) 
   1657     {
   1658       correction = (MALLOC_ALIGNMENT) - front_misalign;
   1659       brk += correction;
   1660     }
   1661     else
   1662       correction = 0;
   1663 
   1664     /* Guarantee the next brk will be at a page boundary */
   1665     correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
   1666 
   1667     /* Allocate correction */
   1668     new_brk = (char*)(MORECORE (correction));
   1669     if (new_brk == (char*)(MORECORE_FAILURE)) return; 
   1670 
   1671     sbrked_mem += correction;
   1672 
   1673     top = (mchunkptr)brk;
   1674     top_size = new_brk - brk + correction;
   1675     set_head(top, top_size | PREV_INUSE);
   1676 
   1677     if (old_top != initial_top)
   1678     {
   1679       /* There must have been an intervening foreign sbrk call. */
   1680       /* A double fencepost is necessary to prevent consolidation */
   1681 
   1682       /* If not enough space to do this, then user did something very wrong */
   1683       if (old_top_size < MINSIZE) 
   1684       {
   1685         set_head(top, PREV_INUSE); /* will force null return from malloc */
   1686         return;
   1687       }
   1688 
   1689       /* Also keep size a multiple of MALLOC_ALIGNMENT */
   1690       old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
   1691       set_head_size(old_top, old_top_size);
   1692       chunk_at_offset(old_top, old_top_size          )->size =
   1693         SIZE_SZ|PREV_INUSE;
   1694       chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
   1695         SIZE_SZ|PREV_INUSE;
   1696       /* If possible, release the rest. */
   1697       if (old_top_size >= MINSIZE) 
   1698         fREe(chunk2mem(old_top));
   1699     }
   1700   }
   1701 
   1702   if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 
   1703     max_sbrked_mem = sbrked_mem;
   1704   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
   1705     max_total_mem = mmapped_mem + sbrked_mem;
   1706 
   1707   /* We always land on a page boundary */
   1708   assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
   1709 }
   1710 
   1711 /* Main public routines */
   1712 
   1713 /*
    1714   Malloc Algorithm:
   1715 
   1716     The requested size is first converted into a usable form, `nb'.
   1717     This currently means to add 4 bytes overhead plus possibly more to
   1718     obtain 8-byte alignment and/or to obtain a size of at least
   1719     MINSIZE (currently 16 bytes), the smallest allocatable size.
    1720     (All fits are considered `exact' if they are within MINSIZE bytes; a
    1720     worked example of this size conversion follows this comment block.)
   1721 
    1722     From there, the first of the following steps that succeeds is taken:
   1723 
   1724       1. The bin corresponding to the request size is scanned, and if
   1725          a chunk of exactly the right size is found, it is taken.
   1726 
   1727       2. The most recently remaindered chunk is used if it is big
   1728          enough.  This is a form of (roving) first fit, used only in
   1729          the absence of exact fits. Runs of consecutive requests use
   1730          the remainder of the chunk used for the previous such request
   1731          whenever possible. This limited use of a first-fit style
   1732          allocation strategy tends to give contiguous chunks
   1733          coextensive lifetimes, which improves locality and can reduce
   1734          fragmentation in the long run.
   1735 
   1736       3. Other bins are scanned in increasing size order, using a
   1737          chunk big enough to fulfill the request, and splitting off
   1738          any remainder.  This search is strictly by best-fit; i.e.,
   1739          the smallest (with ties going to approximately the least
   1740          recently used) chunk that fits is selected.
   1741 
   1742       4. If large enough, the chunk bordering the end of memory
   1743          (`top') is split off. (This use of `top' is in accord with
   1744          the best-fit search rule.  In effect, `top' is treated as
   1745          larger (and thus less well fitting) than any other available
   1746          chunk since it can be extended to be as large as necessary
   1747          (up to system limitations).
   1748 
   1749       5. If the request size meets the mmap threshold and the
   1750          system supports mmap, and there are few enough currently
   1751          allocated mmapped regions, and a call to mmap succeeds,
   1752          the request is allocated via direct memory mapping.
   1753 
   1754       6. Otherwise, the top of memory is extended by
   1755          obtaining more space from the system (normally using sbrk,
   1756          but definable to anything else via the MORECORE macro).
   1757          Memory is gathered from the system (in system page-sized
   1758          units) in a way that allows chunks obtained across different
   1759          sbrk calls to be consolidated, but does not require
   1760          contiguous memory. Thus, it should be safe to intersperse
   1761          mallocs with other sbrk calls.
   1762 
   1763 
    1764       All allocations are made from the `lowest' part of any found
   1765       chunk. (The implementation invariant is that prev_inuse is
   1766       always true of any allocated chunk; i.e., that each allocated
   1767       chunk borders either a previously allocated and still in-use chunk,
   1768       or the base of its memory arena.)
   1769 */
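
/*
  Illustrative sketch, guarded out so it is never compiled: the size
  conversion mentioned at the top of the comment above, assuming the
  stated 4-byte overhead, 8-byte alignment and 16-byte MINSIZE.  The
  helper name below is hypothetical.
*/
#if 0
static void request2size_examples(void)   /* hypothetical, illustration only */
{
  INTERNAL_SIZE_T nb;

  nb = request2size(1);    /*  1 + 4 ->  5, below MINSIZE   -> nb = 16 */
  nb = request2size(13);   /* 13 + 4 -> 17, rounded up to 8 -> nb = 24 */
  nb = request2size(24);   /* 24 + 4 -> 28, rounded up to 8 -> nb = 32 */
  (void)nb;
}
#endif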
   1770 
   1771 #if __STD_C
   1772 Void_t* mALLOc(size_t bytes)
   1773 #else
   1774 Void_t* mALLOc(bytes) size_t bytes;
   1775 #endif
   1776 {
   1777   mchunkptr victim;                  /* inspected/selected chunk */
   1778   INTERNAL_SIZE_T victim_size;       /* its size */
   1779   int       idx;                     /* index for bin traversal */
   1780   mbinptr   bin;                     /* associated bin */
   1781   mchunkptr remainder;               /* remainder from a split */
   1782   long      remainder_size;          /* its size */
   1783   int       remainder_index;         /* its bin index */
   1784   unsigned long block;               /* block traverser bit */
   1785   int       startidx;                /* first bin of a traversed block */
   1786   mchunkptr fwd;                     /* misc temp for linking */
   1787   mchunkptr bck;                     /* misc temp for linking */
   1788   mbinptr q;                         /* misc temp */
   1789 
   1790   INTERNAL_SIZE_T nb  = request2size(bytes);  /* padded request size; */
   1791 
   1792   /* Check for exact match in a bin */
   1793 
   1794   if (is_small_request(nb))  /* Faster version for small requests */
   1795   {
   1796     idx = smallbin_index(nb); 
   1797 
   1798     /* No traversal or size check necessary for small bins.  */
   1799 
   1800     q = bin_at(idx);
   1801     victim = last(q);
   1802 
   1803     /* Also scan the next one, since it would have a remainder < MINSIZE */
   1804     if (victim == q)
   1805     {
   1806       q = next_bin(q);
   1807       victim = last(q);
   1808     }
   1809     if (victim != q)
   1810     {
   1811       victim_size = chunksize(victim);
   1812       unlink(victim, bck, fwd);
   1813       set_inuse_bit_at_offset(victim, victim_size);
   1814       check_malloced_chunk(victim, nb);
   1815       return chunk2mem(victim);
   1816     }
   1817 
   1818     idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
   1819 
   1820   }
   1821   else
   1822   {
   1823     idx = bin_index(nb);
   1824     bin = bin_at(idx);
   1825 
   1826     for (victim = last(bin); victim != bin; victim = victim->bk)
   1827     {
   1828       victim_size = chunksize(victim);
   1829       remainder_size = victim_size - nb;
   1830       
   1831       if (remainder_size >= (long)MINSIZE) /* too big */
   1832       {
   1833         --idx; /* adjust to rescan below after checking last remainder */
   1834         break;   
   1835       }
   1836 
   1837       else if (remainder_size >= 0) /* exact fit */
   1838       {
   1839         unlink(victim, bck, fwd);
   1840         set_inuse_bit_at_offset(victim, victim_size);
   1841         check_malloced_chunk(victim, nb);
   1842         return chunk2mem(victim);
   1843       }
   1844     }
   1845     ++idx; 
   1846   }
   1847 
   1848   /* Try to use the last split-off remainder */
   1849 
   1850   if ( (victim = last_remainder->fd) != last_remainder)
   1851   {
   1852     victim_size = chunksize(victim);
   1853     remainder_size = victim_size - nb;
   1854 
   1855     if (remainder_size >= (long)MINSIZE) /* re-split */
   1856     {
   1857       remainder = chunk_at_offset(victim, nb);
   1858       set_head(victim, nb | PREV_INUSE);
   1859       link_last_remainder(remainder);
   1860       set_head(remainder, remainder_size | PREV_INUSE);
   1861       set_foot(remainder, remainder_size);
   1862       check_malloced_chunk(victim, nb);
   1863       return chunk2mem(victim);
   1864     }
   1865 
   1866     clear_last_remainder;
   1867 
   1868     if (remainder_size >= 0)  /* exhaust */
   1869     {
   1870       set_inuse_bit_at_offset(victim, victim_size);
   1871       check_malloced_chunk(victim, nb);
   1872       return chunk2mem(victim);
   1873     }
   1874 
   1875     /* Else place in bin */
   1876 
   1877     frontlink(victim, victim_size, remainder_index, bck, fwd);
   1878   }
   1879 
   1880   /* 
   1881      If there are any possibly nonempty big-enough blocks, 
   1882      search for best fitting chunk by scanning bins in blockwidth units.
   1883   */
   1884 
   1885   if ( (block = idx2binblock(idx)) <= binblocks)  
   1886   {
   1887 
   1888     /* Get to the first marked block */
   1889 
   1890     if ( (block & binblocks) == 0) 
   1891     {
   1892       /* force to an even block boundary */
   1893       idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
   1894       block <<= 1;
   1895       while ((block & binblocks) == 0)
   1896       {
   1897         idx += BINBLOCKWIDTH;
   1898         block <<= 1;
   1899       }
   1900     }
   1901       
   1902     /* For each possibly nonempty block ... */
   1903     for (;;)  
   1904     {
   1905       startidx = idx;          /* (track incomplete blocks) */
   1906       q = bin = bin_at(idx);
   1907 
   1908       /* For each bin in this block ... */
   1909       do
   1910       {
   1911         /* Find and use first big enough chunk ... */
   1912 
   1913         for (victim = last(bin); victim != bin; victim = victim->bk)
   1914         {
   1915           victim_size = chunksize(victim);
   1916           remainder_size = victim_size - nb;
   1917 
   1918           if (remainder_size >= (long)MINSIZE) /* split */
   1919           {
   1920             remainder = chunk_at_offset(victim, nb);
   1921             set_head(victim, nb | PREV_INUSE);
   1922             unlink(victim, bck, fwd);
   1923             link_last_remainder(remainder);
   1924             set_head(remainder, remainder_size | PREV_INUSE);
   1925             set_foot(remainder, remainder_size);
   1926             check_malloced_chunk(victim, nb);
   1927             return chunk2mem(victim);
   1928           }
   1929 
   1930           else if (remainder_size >= 0)  /* take */
   1931           {
   1932             set_inuse_bit_at_offset(victim, victim_size);
   1933             unlink(victim, bck, fwd);
   1934             check_malloced_chunk(victim, nb);
   1935             return chunk2mem(victim);
   1936           }
   1937 
   1938         }
   1939 
   1940        bin = next_bin(bin);
   1941 
   1942       } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
   1943 
   1944       /* Clear out the block bit. */
   1945 
   1946       do   /* Possibly backtrack to try to clear a partial block */
   1947       {
   1948         if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
   1949         {
   1950           binblocks &= ~block;
   1951           break;
   1952         }
   1953         --startidx;
   1954        q = prev_bin(q);
   1955       } while (first(q) == q);
   1956 
   1957       /* Get to the next possibly nonempty block */
   1958 
   1959       if ( (block <<= 1) <= binblocks && (block != 0) ) 
   1960       {
   1961         while ((block & binblocks) == 0)
   1962         {
   1963           idx += BINBLOCKWIDTH;
   1964           block <<= 1;
   1965         }
   1966       }
   1967       else
   1968         break;
   1969     }
   1970   }
   1971 
   1972 
   1973   /* Try to use top chunk */
   1974 
   1975   /* Require that there be a remainder, ensuring top always exists  */
   1976   if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
   1977   {
   1978 
   1979 #if HAVE_MMAP
   1980     /* If big and would otherwise need to extend, try to use mmap instead */
   1981     if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
   1982         (victim = mmap_chunk(nb)) != 0)
   1983       return chunk2mem(victim);
   1984 #endif
   1985 
   1986     /* Try to extend */
   1987     malloc_extend_top(nb);
   1988     if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
    1989     {
    1990       return 0; /* propagate failure */
    1991     }
   1992   }
   1993 
   1994   victim = top;
   1995   set_head(victim, nb | PREV_INUSE);
   1996   top = chunk_at_offset(victim, nb);
   1997   set_head(top, remainder_size | PREV_INUSE);
   1998   check_malloced_chunk(victim, nb);
   1999 
   2000   return chunk2mem(victim);
   2001 }
   2002 
   2003 /*
   2004 
   2005   free() algorithm :
   2006 
   2007     cases:
   2008 
   2009        1. free(0) has no effect.  
   2010 
    2011        2. If the chunk was allocated via mmap, it is released via munmap().
   2012 
   2013        3. If a returned chunk borders the current high end of memory,
   2014           it is consolidated into the top, and if the total unused
   2015           topmost memory exceeds the trim threshold, malloc_trim is
   2016           called.
   2017 
   2018        4. Other chunks are consolidated as they arrive, and
   2019           placed in corresponding bins. (This includes the case of
   2020           consolidating with the current `last_remainder').
   2021 
   2022 */
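
/*
  Illustrative sketch, guarded out so it is never compiled: a caller's
  view of cases 1 and 3 above.  The 64K figure is arbitrary; it only needs
  to be a block carved from the sbrk'd top rather than mmapped.  The
  helper name below is hypothetical.
*/
#if 0
static void free_cases_sketch(void)   /* hypothetical, illustration only */
{
  void* p;

  free(0);               /* case 1: no effect                              */

  p = malloc(64 * 1024); /* typically split off the top chunk              */
  free(p);               /* case 3: consolidated back into top; if the     */
                         /* unused top now reaches trim_threshold,         */
                         /* malloc_trim may return pages to the system     */
}
#endif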
   2023 
   2024 
   2025 #if __STD_C
   2026 void fREe(Void_t* mem)
   2027 #else
   2028 void fREe(mem) Void_t* mem;
   2029 #endif
   2030 {
   2031   mchunkptr p;         /* chunk corresponding to mem */
   2032   INTERNAL_SIZE_T hd;  /* its head field */
   2033   INTERNAL_SIZE_T sz;  /* its size */
   2034   int       idx;       /* its bin index */
   2035   mchunkptr next;      /* next contiguous chunk */
   2036   INTERNAL_SIZE_T nextsz; /* its size */
   2037   INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
   2038   mchunkptr bck;       /* misc temp for linking */
   2039   mchunkptr fwd;       /* misc temp for linking */
   2040   int       islr;      /* track whether merging with last_remainder */
   2041 
   2042   if (mem == 0)                              /* free(0) has no effect */
   2043     return;
   2044 
   2045   p = mem2chunk(mem);
   2046   hd = p->size;
   2047 
   2048 #if HAVE_MMAP
   2049   if (hd & IS_MMAPPED)                       /* release mmapped memory. */
   2050   {
   2051     munmap_chunk(p);
   2052     return;
   2053   }
   2054 #endif
   2055   
   2056   check_inuse_chunk(p);
   2057   
   2058   sz = hd & ~PREV_INUSE;
   2059   next = chunk_at_offset(p, sz);
   2060   nextsz = chunksize(next);
   2061   
   2062   if (next == top)                            /* merge with top */
   2063   {
   2064     sz += nextsz;
   2065 
   2066     if (!(hd & PREV_INUSE))                    /* consolidate backward */
   2067     {
   2068       prevsz = p->prev_size;
   2069       p = chunk_at_offset(p, -prevsz);
   2070       sz += prevsz;
   2071       unlink(p, bck, fwd);
   2072     }
   2073 
   2074     set_head(p, sz | PREV_INUSE);
   2075     top = p;
   2076     if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 
   2077       malloc_trim(top_pad); 
   2078     return;
   2079   }
   2080 
   2081   set_head(next, nextsz);                    /* clear inuse bit */
   2082 
   2083   islr = 0;
   2084 
   2085   if (!(hd & PREV_INUSE))                    /* consolidate backward */
   2086   {
   2087     prevsz = p->prev_size;
   2088     p = chunk_at_offset(p, -prevsz);
   2089     sz += prevsz;
   2090     
   2091     if (p->fd == last_remainder)             /* keep as last_remainder */
   2092       islr = 1;
   2093     else
   2094       unlink(p, bck, fwd);
   2095   }
   2096   
   2097   if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
   2098   {
   2099     sz += nextsz;
   2100     
   2101     if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
   2102     {
   2103       islr = 1;
   2104       link_last_remainder(p);   
   2105     }
   2106     else
   2107       unlink(next, bck, fwd);
   2108   }
   2109 
   2110 
   2111   set_head(p, sz | PREV_INUSE);
   2112   set_foot(p, sz);
   2113   if (!islr)
   2114     frontlink(p, sz, idx, bck, fwd);  
   2115 }
   2116 
   2117 /*
   2118 
   2119   Realloc algorithm:
   2120 
   2121     Chunks that were obtained via mmap cannot be extended or shrunk
   2122     unless HAVE_MREMAP is defined, in which case mremap is used.
   2123     Otherwise, if their reallocation is for additional space, they are
   2124     copied.  If for less, they are just left alone.
   2125 
   2126     Otherwise, if the reallocation is for additional space, and the
   2127     chunk can be extended, it is, else a malloc-copy-free sequence is
   2128     taken.  There are several different ways that a chunk could be
   2129     extended. All are tried:
   2130 
   2131        * Extending forward into following adjacent free chunk.
   2132        * Shifting backwards, joining preceding adjacent space
   2133        * Both shifting backwards and extending forward.
   2134        * Extending into newly sbrked space
   2135 
   2136     Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
   2137     size argument of zero (re)allocates a minimum-sized chunk.
   2138 
   2139     If the reallocation is for less space, and the new request is for
   2140     a `small' (<512 bytes) size, then the newly unused space is lopped
   2141     off and freed.
   2142 
   2143     The old unix realloc convention of allowing the last-free'd chunk
   2144     to be used as an argument to realloc is no longer supported.
   2145     I don't know of any programs still relying on this feature,
   2146     and allowing it would also allow too many other incorrect 
   2147     usages of realloc to be sensible.
   2148 
   2149 
   2150 */
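
/*
  Illustrative sketch, guarded out so it is never compiled: a caller's
  view of the rules above.  When no in-place extension is possible, the
  malloc-copy-free path means the returned pointer can differ from the
  old one.  The helper name below is hypothetical.
*/
#if 0
static void realloc_sketch(void)   /* hypothetical, illustration only */
{
  char* p = realloc(0, 100);        /* realloc of null == malloc          */
  char* q;

  if (p == 0) return;

  q = realloc(p, 10000);            /* may extend in place or move        */
  if (q == 0) { free(p); return; }  /* on failure the old block survives  */

  p = q;                            /* old pointer is stale if it moved   */
  free(p);
}
#endif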
   2151 
   2152 
   2153 #if __STD_C
   2154 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
   2155 #else
   2156 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
   2157 #endif
   2158 {
   2159   INTERNAL_SIZE_T    nb;      /* padded request size */
   2160 
   2161   mchunkptr oldp;             /* chunk corresponding to oldmem */
   2162   INTERNAL_SIZE_T    oldsize; /* its size */
   2163 
   2164   mchunkptr newp;             /* chunk to return */
   2165   INTERNAL_SIZE_T    newsize; /* its size */
   2166   Void_t*   newmem;           /* corresponding user mem */
   2167 
   2168   mchunkptr next;             /* next contiguous chunk after oldp */
   2169   INTERNAL_SIZE_T  nextsize;  /* its size */
   2170 
   2171   mchunkptr prev;             /* previous contiguous chunk before oldp */
   2172   INTERNAL_SIZE_T  prevsize;  /* its size */
   2173 
   2174   mchunkptr remainder;        /* holds split off extra space from newp */
   2175   INTERNAL_SIZE_T  remainder_size;   /* its size */
   2176 
   2177   mchunkptr bck;              /* misc temp for linking */
   2178   mchunkptr fwd;              /* misc temp for linking */
   2179 
   2180 #ifdef REALLOC_ZERO_BYTES_FREES
   2181   if (bytes == 0) { fREe(oldmem); return 0; }
   2182 #endif
   2183 
   2184 
   2185   /* realloc of null is supposed to be same as malloc */
   2186   if (oldmem == 0) return mALLOc(bytes);
   2187 
   2188   newp    = oldp    = mem2chunk(oldmem);
   2189   newsize = oldsize = chunksize(oldp);
   2190 
   2191 
   2192   nb = request2size(bytes);
   2193 
   2194 #if HAVE_MMAP
   2195   if (chunk_is_mmapped(oldp)) 
   2196   {
   2197 #if HAVE_MREMAP
   2198     newp = mremap_chunk(oldp, nb);
   2199     if(newp) return chunk2mem(newp);
   2200 #endif
   2201     /* Note the extra SIZE_SZ overhead. */
   2202     if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
   2203     /* Must alloc, copy, free. */
   2204     newmem = mALLOc(bytes);
   2205     if (newmem == 0) return 0; /* propagate failure */
   2206     MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
   2207     munmap_chunk(oldp);
   2208     return newmem;
   2209   }
   2210 #endif
   2211 
   2212   check_inuse_chunk(oldp);
   2213 
   2214   if ((long)(oldsize) < (long)(nb))  
   2215   {
   2216 
   2217     /* Try expanding forward */
   2218 
   2219     next = chunk_at_offset(oldp, oldsize);
   2220     if (next == top || !inuse(next)) 
   2221     {
   2222       nextsize = chunksize(next);
   2223 
   2224       /* Forward into top only if a remainder */
   2225       if (next == top)
   2226       {
   2227         if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
   2228         {
   2229           newsize += nextsize;
   2230           top = chunk_at_offset(oldp, nb);
   2231           set_head(top, (newsize - nb) | PREV_INUSE);
   2232           set_head_size(oldp, nb);
   2233           return chunk2mem(oldp);
   2234         }
   2235       }
   2236 
   2237       /* Forward into next chunk */
   2238       else if (((long)(nextsize + newsize) >= (long)(nb)))
   2239       { 
   2240         unlink(next, bck, fwd);
   2241         newsize  += nextsize;
   2242         goto split;
   2243       }
   2244     }
   2245     else
   2246     {
   2247       next = 0;
   2248       nextsize = 0;
   2249     }
   2250 
   2251     /* Try shifting backwards. */
   2252 
   2253     if (!prev_inuse(oldp))
   2254     {
   2255       prev = prev_chunk(oldp);
   2256       prevsize = chunksize(prev);
   2257 
   2258       /* try forward + backward first to save a later consolidation */
   2259 
   2260       if (next != 0)
   2261       {
   2262         /* into top */
   2263         if (next == top)
   2264         {
   2265           if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
   2266           {
   2267             unlink(prev, bck, fwd);
   2268             newp = prev;
   2269             newsize += prevsize + nextsize;
   2270             newmem = chunk2mem(newp);
   2271             MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
   2272             top = chunk_at_offset(newp, nb);
   2273             set_head(top, (newsize - nb) | PREV_INUSE);
   2274             set_head_size(newp, nb);
   2275             return newmem;
   2276           }
   2277         }
   2278 
   2279         /* into next chunk */
   2280         else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
   2281         {
   2282           unlink(next, bck, fwd);
   2283           unlink(prev, bck, fwd);
   2284           newp = prev;
   2285           newsize += nextsize + prevsize;
   2286           newmem = chunk2mem(newp);
   2287           MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
   2288           goto split;
   2289         }
   2290       }
   2291       
   2292       /* backward only */
   2293       if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)  
   2294       {
   2295         unlink(prev, bck, fwd);
   2296         newp = prev;
   2297         newsize += prevsize;
   2298         newmem = chunk2mem(newp);
   2299         MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
   2300         goto split;
   2301       }
   2302     }
   2303 
   2304     /* Must allocate */
   2305 
   2306     newmem = mALLOc (bytes);
   2307 
   2308     if (newmem == 0)  /* propagate failure */
   2309       return 0; 
   2310 
   2311     /* Avoid copy if newp is next chunk after oldp. */
   2312     /* (This can only happen when new chunk is sbrk'ed.) */
   2313 
   2314     if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 
   2315     {
   2316       newsize += chunksize(newp);
   2317       newp = oldp;
   2318       goto split;
   2319     }
   2320 
   2321     /* Otherwise copy, free, and exit */
   2322     MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
   2323     fREe(oldmem);
   2324     return newmem;
   2325   }
   2326 
   2327 
   2328  split:  /* split off extra room in old or expanded chunk */
   2329 
   2330   if (newsize - nb >= MINSIZE) /* split off remainder */
   2331   {
   2332     remainder = chunk_at_offset(newp, nb);
   2333     remainder_size = newsize - nb;
   2334     set_head_size(newp, nb);
   2335     set_head(remainder, remainder_size | PREV_INUSE);
   2336     set_inuse_bit_at_offset(remainder, remainder_size);
   2337     fREe(chunk2mem(remainder)); /* let free() deal with it */
   2338   }
   2339   else
   2340   {
   2341     set_head_size(newp, newsize);
   2342     set_inuse_bit_at_offset(newp, newsize);
   2343   }
   2344 
   2345   check_inuse_chunk(newp);
   2346   return chunk2mem(newp);
   2347 }
   2348 
   2349 /*
   2350   memalign algorithm:
   2351 
   2352     memalign requests more than enough space from malloc, finds a spot
   2353     within that chunk that meets the alignment request, and then
   2354     possibly frees the leading and trailing space. 
   2355 
   2356     The alignment argument must be a power of two. This property is not
   2357     checked by memalign, so misuse may result in random runtime errors.
   2358 
   2359     8-byte alignment is guaranteed by normal malloc calls, so don't
   2360     bother calling memalign with an argument of 8 or less.
   2361 
   2362     Overreliance on memalign is a sure way to fragment space.
   2363 */
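
/*
  Illustrative sketch, guarded out so it is never compiled: the alignment
  argument must be a power of two (memalign does not verify this), and
  values of 8 or less are already guaranteed by plain malloc.  The helper
  name below is hypothetical.
*/
#if 0
static void memalign_sketch(void)   /* hypothetical, illustration only */
{
  void* p = memalign(64, 1000);   /* 64 is a power of two > 8 */

  if (p != 0)
  {
    /* ((unsigned long)p % 64) == 0 holds here */
    free(p);
  }
}
#endif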
   2364 
   2365 
   2366 #if __STD_C
   2367 Void_t* mEMALIGn(size_t alignment, size_t bytes)
   2368 #else
   2369 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
   2370 #endif
   2371 {
   2372   INTERNAL_SIZE_T    nb;      /* padded  request size */
   2373   char*     m;                /* memory returned by malloc call */
   2374   mchunkptr p;                /* corresponding chunk */
   2375   char*     brk;              /* alignment point within p */
   2376   mchunkptr newp;             /* chunk to return */
   2377   INTERNAL_SIZE_T  newsize;   /* its size */
    2378   INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
   2379   mchunkptr remainder;        /* spare room at end to split off */
   2380   long      remainder_size;   /* its size */
   2381 
   2382   /* If need less alignment than we give anyway, just relay to malloc */
   2383 
   2384   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
   2385 
   2386   /* Otherwise, ensure that it is at least a minimum chunk size */
   2387   
   2388   if (alignment <  MINSIZE) alignment = MINSIZE;
   2389 
   2390   /* Call malloc with worst case padding to hit alignment. */
   2391 
   2392   nb = request2size(bytes);
   2393   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
   2394 
   2395   if (m == 0) return 0; /* propagate failure */
   2396 
   2397   p = mem2chunk(m);
   2398 
   2399   if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
   2400   {
   2401 #if HAVE_MMAP
   2402     if(chunk_is_mmapped(p))
   2403       return chunk2mem(p); /* nothing more to do */
   2404 #endif
   2405   }
   2406   else /* misaligned */
   2407   {
   2408     /* 
   2409       Find an aligned spot inside chunk.
   2410       Since we need to give back leading space in a chunk of at 
   2411       least MINSIZE, if the first calculation places us at
   2412       a spot with less than MINSIZE leader, we can move to the
   2413       next aligned spot -- we've allocated enough total room so that
   2414       this is always possible.
   2415     */
   2416 
   2417     brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
   2418     if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
   2419 
   2420     newp = (mchunkptr)brk;
   2421     leadsize = brk - (char*)(p);
   2422     newsize = chunksize(p) - leadsize;
   2423 
   2424 #if HAVE_MMAP
   2425     if(chunk_is_mmapped(p)) 
   2426     {
   2427       newp->prev_size = p->prev_size + leadsize;
   2428       set_head(newp, newsize|IS_MMAPPED);
   2429       return chunk2mem(newp);
   2430     }
   2431 #endif
   2432 
   2433     /* give back leader, use the rest */
   2434 
   2435     set_head(newp, newsize | PREV_INUSE);
   2436     set_inuse_bit_at_offset(newp, newsize);
   2437     set_head_size(p, leadsize);
   2438     fREe(chunk2mem(p));
   2439     p = newp;
   2440 
   2441     assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
   2442   }
   2443 
   2444   /* Also give back spare room at the end */
   2445 
   2446   remainder_size = chunksize(p) - nb;
   2447 
   2448   if (remainder_size >= (long)MINSIZE)
   2449   {
   2450     remainder = chunk_at_offset(p, nb);
   2451     set_head(remainder, remainder_size | PREV_INUSE);
   2452     set_head_size(p, nb);
   2453     fREe(chunk2mem(remainder));
   2454   }
   2455 
   2456   check_inuse_chunk(p);
   2457   return chunk2mem(p);
   2458 
   2459 }
   2460 
   2461 /*
   2462     valloc just invokes memalign with alignment argument equal
   2463     to the page size of the system (or as near to this as can
   2464     be figured out from all the includes/defines above.)
   2465 */
   2466 
   2467 #if __STD_C
   2468 Void_t* vALLOc(size_t bytes)
   2469 #else
   2470 Void_t* vALLOc(bytes) size_t bytes;
   2471 #endif
   2472 {
   2473   return mEMALIGn (malloc_getpagesize, bytes);
   2474 }
   2475 
   2476 /* 
   2477   pvalloc just invokes valloc for the nearest pagesize
    2478   that will accommodate the request
   2479 */
   2480 
   2481 #if __STD_C
   2482 Void_t* pvALLOc(size_t bytes)
   2483 #else
   2484 Void_t* pvALLOc(bytes) size_t bytes;
   2485 #endif
   2486 {
   2487   size_t pagesize = malloc_getpagesize;
   2488   return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
   2489 }
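
/*
  Illustrative sketch, guarded out so it is never compiled: the rounding
  expression used by pvALLOc above, worked through with an assumed
  4096-byte page.  The helper name below is hypothetical.
*/
#if 0
static void page_rounding_examples(void)   /* hypothetical, illustration only */
{
  size_t pagesize = 4096;
  size_t r;

  r = (1    + pagesize - 1) & ~(pagesize - 1);  /* ->  4096 (one page)  */
  r = (4096 + pagesize - 1) & ~(pagesize - 1);  /* ->  4096 (exact fit) */
  r = (5000 + pagesize - 1) & ~(pagesize - 1);  /* ->  8192 (two pages) */
  (void)r;
}
#endif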
   2490 
   2491 /*
   2492   calloc calls malloc, then zeroes out the allocated chunk.
   2493 */
   2494 
   2495 #if __STD_C
   2496 Void_t* cALLOc(size_t n, size_t elem_size)
   2497 #else
   2498 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
   2499 #endif
   2500 {
   2501   mchunkptr p;
   2502   INTERNAL_SIZE_T csz;
   2503 
   2504   INTERNAL_SIZE_T sz = n * elem_size;
   2505 
   2506   /* check if expand_top called, in which case don't need to clear */
   2507 #if MORECORE_CLEARS
   2508   mchunkptr oldtop = top;
   2509   INTERNAL_SIZE_T oldtopsize = chunksize(top);
   2510 #endif
   2511   Void_t* mem = mALLOc (sz);
   2512 
   2513   if (mem == 0) 
   2514     return 0;
   2515   else
   2516   {
   2517     p = mem2chunk(mem);
   2518 
   2519     /* Two optional cases in which clearing not necessary */
   2520 
   2521 #if HAVE_MMAP
   2522     if (chunk_is_mmapped(p)) return mem;
   2523 #endif
   2524 
   2525     csz = chunksize(p);
   2526 
   2527 #if MORECORE_CLEARS
   2528     if (p == oldtop && csz > oldtopsize) 
   2529     {
   2530       /* clear only the bytes from non-freshly-sbrked memory */
   2531       csz = oldtopsize;
   2532     }
   2533 #endif
   2534 
   2535     MALLOC_ZERO(mem, csz - SIZE_SZ);
   2536     return mem;
   2537   }
   2538 }
   2539 
   2540 /*
   2541   cfree just calls free. It is needed/defined on some systems
   2542   that pair it with calloc, presumably for odd historical reasons.
   2543 */
   2544 
   2545 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
   2546 #if __STD_C
   2547 void cfree(Void_t *mem)
   2548 #else
   2549 void cfree(mem) Void_t *mem;
   2550 #endif
   2551 {
   2552   fREe(mem);
   2553 }
   2554 #endif
   2555 
   2556 /*
   2557     Malloc_trim gives memory back to the system (via negative
   2558     arguments to sbrk) if there is unused memory at the `high' end of
   2559     the malloc pool. You can call this after freeing large blocks of
   2560     memory to potentially reduce the system-level memory requirements
   2561     of a program. However, it cannot guarantee to reduce memory. Under
   2562     some allocation patterns, some large free blocks of memory will be
   2563     locked between two used chunks, so they cannot be given back to
   2564     the system.
   2565 
   2566     The `pad' argument to malloc_trim represents the amount of free
   2567     trailing space to leave untrimmed. If this argument is zero,
   2568     only the minimum amount of memory to maintain internal data
   2569     structures will be left (one page or less). Non-zero arguments
   2570     can be supplied to maintain enough trailing space to service
   2571     future expected allocations without having to re-obtain memory
   2572     from the system.
   2573 
   2574     Malloc_trim returns 1 if it actually released any memory, else 0.
   2575 */
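
/*
  Illustrative sketch, guarded out so it is never compiled: trimming after
  releasing a large block, keeping 64K of trailing slack for future
  requests.  This assumes the block came from sbrk'd space; an mmapped
  block is returned directly by free instead.  The helper name below is
  hypothetical.
*/
#if 0
static void malloc_trim_sketch(void)   /* hypothetical, illustration only */
{
  void* big = malloc(512 * 1024);

  if (big != 0)
  {
    free(big);
    if (malloc_trim(64 * 1024))
    {
      /* some top-most memory was actually given back to the system */
    }
  }
}
#endif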
   2576 
   2577 #if __STD_C
   2578 int malloc_trim(size_t pad)
   2579 #else
   2580 int malloc_trim(pad) size_t pad;
   2581 #endif
   2582 {
   2583   long  top_size;        /* Amount of top-most memory */
   2584   long  extra;           /* Amount to release */
   2585   char* current_brk;     /* address returned by pre-check sbrk call */
   2586   char* new_brk;         /* address returned by negative sbrk call */
   2587 
   2588   unsigned long pagesz = malloc_getpagesize;
   2589 
   2590   top_size = chunksize(top);
   2591   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
   2592 
   2593   if (extra < (long)pagesz)  /* Not enough memory to release */
   2594     return 0;
   2595 
   2596   else
   2597   {
   2598     /* Test to make sure no one else called sbrk */
   2599     current_brk = (char*)(MORECORE (0));
   2600     if (current_brk != (char*)(top) + top_size)
   2601       return 0;     /* Apparently we don't own memory; must fail */
   2602 
   2603     else
   2604     {
   2605       new_brk = (char*)(MORECORE (-extra));
   2606       
   2607       if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
   2608       {
   2609         /* Try to figure out what we have */
   2610         current_brk = (char*)(MORECORE (0));
   2611         top_size = current_brk - (char*)top;
   2612         if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
   2613         {
   2614           sbrked_mem = current_brk - sbrk_base;
   2615           set_head(top, top_size | PREV_INUSE);
   2616         }
   2617         check_chunk(top);
   2618         return 0; 
   2619       }
   2620 
   2621       else
   2622       {
   2623         /* Success. Adjust top accordingly. */
   2624         set_head(top, (top_size - extra) | PREV_INUSE);
   2625         sbrked_mem -= extra;
   2626         check_chunk(top);
   2627         return 1;
   2628       }
   2629     }
   2630   }
   2631 }
   2632 
   2633 /*
   2634   malloc_usable_size:
   2635 
   2636     This routine tells you how many bytes you can actually use in an
   2637     allocated chunk, which may be more than you requested (although
   2638     often not). You can use this many bytes without worrying about
   2639     overwriting other allocated objects. Not a particularly great
   2640     programming practice, but still sometimes useful.
   2641 */
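
/*
  Illustrative sketch, guarded out so it is never compiled: the usable
  size may exceed the requested size because of alignment and bookkeeping,
  and every one of those bytes is safe to use.  The helper name below is
  hypothetical.
*/
#if 0
static void usable_size_sketch(void)   /* hypothetical, illustration only */
{
  char* p = malloc(13);

  if (p != 0)
  {
    size_t n = malloc_usable_size(p);   /* n >= 13                   */
    p[n - 1] = 'x';                     /* last usable byte is fine  */
    free(p);
  }
}
#endif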
   2642 
   2643 #if __STD_C
   2644 size_t malloc_usable_size(Void_t* mem)
   2645 #else
   2646 size_t malloc_usable_size(mem) Void_t* mem;
   2647 #endif
   2648 {
   2649   mchunkptr p;
   2650   if (mem == 0)
   2651     return 0;
   2652   else
   2653   {
   2654     p = mem2chunk(mem);
   2655     if(!chunk_is_mmapped(p))
   2656     {
   2657       if (!inuse(p)) return 0;
   2658       check_inuse_chunk(p);
   2659       return chunksize(p) - SIZE_SZ;
   2660     }
   2661     return chunksize(p) - 2*SIZE_SZ;
   2662   }
   2663 }
   2664 
   2665 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
   2666 
   2667 static void malloc_update_mallinfo() 
   2668 {
   2669   int i;
   2670   mbinptr b;
   2671   mchunkptr p;
   2672 #if DEBUG
   2673   mchunkptr q;
   2674 #endif
   2675 
   2676   INTERNAL_SIZE_T avail = chunksize(top);
   2677   int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
   2678 
   2679   for (i = 1; i < NAV; ++i)
   2680   {
   2681     b = bin_at(i);
   2682     for (p = last(b); p != b; p = p->bk) 
   2683     {
   2684 #if DEBUG
   2685       check_free_chunk(p);
   2686       for (q = next_chunk(p); 
   2687            q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 
   2688            q = next_chunk(q))
   2689         check_inuse_chunk(q);
   2690 #endif
   2691       avail += chunksize(p);
   2692       navail++;
   2693     }
   2694   }
   2695 
   2696   current_mallinfo.ordblks = navail;
   2697   current_mallinfo.uordblks = sbrked_mem - avail;
   2698   current_mallinfo.fordblks = avail;
   2699   current_mallinfo.hblks = n_mmaps;
   2700   current_mallinfo.hblkhd = mmapped_mem;
   2701   current_mallinfo.keepcost = chunksize(top);
   2702 
   2703 }
   2704 
   2705 /*
   2706 
   2707   malloc_stats:
   2708 
    2709     Prints on stderr the amount of space obtained from the system (both
   2710     via sbrk and mmap), the maximum amount (which may be more than
   2711     current if malloc_trim and/or munmap got called), the maximum
   2712     number of simultaneous mmap regions used, and the current number
   2713     of bytes allocated via malloc (or realloc, etc) but not yet
   2714     freed. (Note that this is the number of bytes allocated, not the
   2715     number requested. It will be larger than the number requested
   2716     because of alignment and bookkeeping overhead.)
   2717 
   2718 */
   2719 
   2720 void malloc_stats()
   2721 {
   2722   malloc_update_mallinfo();
   2723 /*
    2724 	This is commented out since we don't have printf in userspace yet
   2725 
   2726   printf("max system bytes = %10u\n", 
   2727           (unsigned int)(max_total_mem));
   2728   printf("system bytes     = %10u\n", 
   2729           (unsigned int)(sbrked_mem + mmapped_mem));
   2730   printf("in use bytes     = %10u\n", 
   2731           (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
   2732 #if HAVE_MMAP
   2733   printf("max mmap regions = %10u\n", 
   2734           (unsigned int)max_n_mmaps);
   2735 #endif
   2736 */
   2737 }
   2738 
   2739 /*
   2740   mallinfo returns a copy of updated current mallinfo.
   2741 */
   2742 
   2743 struct mallinfo mALLINFo()
   2744 {
   2745   malloc_update_mallinfo();
   2746   return current_mallinfo;
   2747 }
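
/*
  Example (an illustrative sketch, assuming mALLINFo expands to the
  public name mallinfo via the mapping macros earlier in this file):

    struct mallinfo mi = mallinfo();

  where, per malloc_update_mallinfo above,

    mi.fordblks  is the number of bytes free and available for reuse,
    mi.uordblks  the number of bytes in use within sbrked memory,
    mi.hblkhd    the number of bytes held in mmapped regions, and
    mi.keepcost  the size of the top chunk, releasable by malloc_trim.
*/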
   2748 
   2749 /*
   2750   mallopt:
   2751 
   2752     mallopt is the general SVID/XPG interface to tunable parameters.
   2753     The format is to provide a (parameter-number, parameter-value) pair.
   2754     mallopt then sets the corresponding parameter to the argument
   2755     value if it can (i.e., so long as the value is meaningful),
   2756     and returns 1 if successful else 0.
   2757 
   2758     See descriptions of tunable parameters above.
   2759 
   2760 */
   2761 
   2762 #if __STD_C
   2763 int mALLOPt(int param_number, int value)
   2764 #else
   2765 int mALLOPt(param_number, value) int param_number; int value;
   2766 #endif
   2767 {
   2768   switch(param_number) 
   2769   {
   2770     case M_TRIM_THRESHOLD:
   2771       trim_threshold = value; return 1; 
   2772     case M_TOP_PAD:
   2773       top_pad = value; return 1; 
   2774     case M_MMAP_THRESHOLD:
   2775       mmap_threshold = value; return 1;
   2776     case M_MMAP_MAX:
   2777 #if HAVE_MMAP
   2778       n_mmaps_max = value; return 1;
   2779 #else
   2780       if (value != 0) return 0; else { n_mmaps_max = value; return 1; }
   2781 #endif
   2782 
   2783     default:
   2784       return 0;
   2785   }
   2786 }
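
/*
  Example (an illustrative sketch, assuming mALLOPt expands to the
  public name mallopt via the mapping macros earlier in this file):
  raise the mmap threshold so that only requests of 256kb or more are
  serviced by mmap, and cap the number of simultaneous mmap regions at
  16.  Each call returns 1 on success; an unknown parameter number
  returns 0, as does a nonzero M_MMAP_MAX when HAVE_MMAP is not set.

    mallopt(M_MMAP_THRESHOLD, 256 * 1024);
    mallopt(M_MMAP_MAX, 16);
*/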
   2787 
   2788 /*
   2789 
   2790 History:
   2791 
   2792     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
   2793       * Fixed ordering problem with boundary-stamping
   2794 
   2795     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
   2796       * Added pvalloc, as recommended by H.J. Liu
   2797       * Added 64bit pointer support mainly from Wolfram Gloger
   2798       * Added anonymously donated WIN32 sbrk emulation
   2799       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
   2800       * malloc_extend_top: fix mask error that caused wastage after
   2801         foreign sbrks
   2802       * Add linux mremap support code from HJ Liu
   2803    
   2804     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
   2805       * Integrated most documentation with the code.
   2806       * Add support for mmap, with help from 
   2807         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
   2808       * Use last_remainder in more cases.
   2809       * Pack bins using idea from  colin@nyx10.cs.du.edu
   2810       * Use ordered bins instead of best-fit threshold
   2811       * Eliminate block-local decls to simplify tracing and debugging.
   2812       * Support another case of realloc via move into top
   2813       * Fix error occurring when initial sbrk_base not word-aligned.
   2814       * Rely on page size for units instead of SBRK_UNIT to
   2815         avoid surprises about sbrk alignment conventions.
   2816       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
   2817         (raymond@es.ele.tue.nl) for the suggestion. 
   2818       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
   2819       * More precautions for cases where other routines call sbrk,
   2820         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
   2821       * Added macros etc., allowing use in linux libc from
   2822         H.J. Lu (hjl@gnu.ai.mit.edu)
   2823       * Inverted this history list
   2824 
   2825     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
   2826       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
   2827       * Removed all preallocation code since under current scheme
   2828         the work required to undo bad preallocations exceeds
   2829         the work saved in good cases for most test programs.
   2830       * No longer use return list or unconsolidated bins since
   2831         no scheme using them consistently outperforms those that don't
   2832         given above changes.
   2833       * Use best fit for very large chunks to prevent some worst-cases.
   2834       * Added some support for debugging
   2835 
   2836     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
   2837       * Removed footers when chunks are in use. Thanks to
   2838         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
   2839 
   2840     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
   2841       * Added malloc_trim, with help from Wolfram Gloger 
   2842         (wmglo@Dent.MED.Uni-Muenchen.DE).
   2843 
   2844     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
   2845 
   2846     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
   2847       * realloc: try to expand in both directions
   2848       * malloc: swap order of clean-bin strategy;
   2849       * realloc: only conditionally expand backwards
   2850       * Try not to scavenge used bins
   2851       * Use bin counts as a guide to preallocation
   2852       * Occasionally bin return list chunks in first scan
   2853       * Add a few optimizations from colin@nyx10.cs.du.edu
   2854 
   2855     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
   2856       * faster bin computation & slightly different binning
   2857       * merged all consolidations to one part of malloc proper
   2858          (eliminating old malloc_find_space & malloc_clean_bin)
   2859       * Scan 2 returns chunks (not just 1)
   2860       * Propagate failure in realloc if malloc returns 0
   2861       * Add stuff to allow compilation on non-ANSI compilers 
   2862           from kpv@research.att.com
   2863      
   2864     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
   2865       * removed potential for odd address access in prev_chunk
   2866       * removed dependency on getpagesize.h
   2867       * misc cosmetics and a bit more internal documentation
   2868       * anticosmetics: mangled names in macros to evade debugger strangeness
   2869       * tested on sparc, hp-700, dec-mips, rs6000 
   2870           with gcc & native cc (hp, dec only) allowing
   2871           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
   2872 
   2873     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
   2874       * Based loosely on libg++-1.2X malloc. (It retains some of the overall 
   2875          structure of old version,  but most details differ.)
   2876 
   2877 */
   2878