/* SPDX-License-Identifier: GPL-2.0+ */
/*
  This code is based on a version of malloc/free/realloc written by Doug Lea and
  released to the public domain. Send questions/comments/complaints/performance
  data to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
	   http://g.oswego.edu/pub/misc/malloc.c
	 Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
	   http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p; no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to the nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t representation:        4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
			  8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
			  8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
	 1. Because requests for zero bytes allocate non-zero space,
	    the worst case wastage for a request of zero bytes is 24 bytes.
	 2. For requests >= mmap_threshold that are serviced via
	    mmap(), the worst case wastage is 8 bytes plus the remainder
	    from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. Among other
    consequences, it uses a lot of macros. Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H        (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default: 1)
     true (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX            (default: undefined)
     Prefix all public routines with the string 'dl'. Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.

*/


#ifndef __MALLOC_H__
#define __MALLOC_H__

/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C 1
#else
#if __cplusplus
#define __STD_C 1
#else
#define __STD_C 0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t void
#else
#define Void_t char
#endif
#endif /*Void_t*/

#if __STD_C
#include <linux/stddef.h> /* for size_t */
#else
#include <sys/types.h>
#endif /* __STD_C */

#ifdef __cplusplus
extern "C" {
#endif

#if 0 /* not for U-Boot */
#include <stdio.h> /* needed for malloc_stats */
#endif

/*
    Compile-time options
*/

/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs. This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory. The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

/*
    INTERNAL_SIZE_T is the word-size used for internal bookkeeping
    of chunk sizes. On a 64-bit machine, you can reduce malloc
    overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
    at the expense of not being able to handle requests greater than
    2^31. This limitation is hardly ever a concern; you are encouraged
    to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
    REALLOC_ZERO_BYTES_FREES should be set if a call to
    realloc with zero bytes should be the same as a call to free.
    Some people think it should. Otherwise, since this malloc
    returns a unique pointer for malloc(0), so does realloc(p, 0).
*/

/* #define REALLOC_ZERO_BYTES_FREES */

/*
    WIN32 causes an emulation of sbrk to be compiled in;
    mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
    Include 'windows.h' to get the necessary declarations for the
    Microsoft Visual C++ data structures and routines used in the 'sbrk'
    emulation.

    Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
    Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif

/*
    HAVE_MEMCPY should be defined if you are not otherwise using
    ANSI STD C, but still have memcpy and memset in your C library
    and want to use them in calloc and realloc. Otherwise simple
    macro versions are defined here.

    USE_MEMCPY should be defined as 1 if you actually want to
    have memset and memcpy called. People report that the macro
    versions are often enough faster than libc versions on many
    systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
/* U-Boot defines memset() and memcpy() in /include/linux/string.h
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
*/
#include <linux/string.h>
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes) \
do { \
  INTERNAL_SIZE_T mzsz = (nbytes); \
  if(mzsz <= 9*sizeof(mzsz)) { \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0; \
				     *mz++ = 0; \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0; \
				     *mz++ = 0; \
	if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
				     *mz++ = 0; }}} \
    *mz++ = 0; \
    *mz++ = 0; \
    *mz   = 0; \
  } else memset((charp), 0, mzsz); \
} while(0)

#define MALLOC_COPY(dest,src,nbytes) \
do { \
  INTERNAL_SIZE_T mcsz = (nbytes); \
  if(mcsz <= 9*sizeof(mcsz)) { \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; \
	if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; }}} \
    *mcdst++ = *mcsrc++; \
    *mcdst++ = *mcsrc++; \
    *mcdst   = *mcsrc  ; \
  } else memcpy(dest, src, mcsz); \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes) \
do { \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
  switch (mctmp) { \
    case 0: for(;;) { *mzp++ = 0; \
    case 7:           *mzp++ = 0; \
    case 6:           *mzp++ = 0; \
    case 5:           *mzp++ = 0; \
    case 4:           *mzp++ = 0; \
    case 3:           *mzp++ = 0; \
    case 2:           *mzp++ = 0; \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
  } \
} while(0)

#define MALLOC_COPY(dest,src,nbytes) \
do { \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
  switch (mctmp) { \
    case 0: for(;;) { *mcdst++ = *mcsrc++; \
    case 7:           *mcdst++ = *mcsrc++; \
    case 6:           *mcdst++ = *mcsrc++; \
    case 5:           *mcdst++ = *mcsrc++; \
    case 4:           *mcdst++ = *mcsrc++; \
    case 3:           *mcdst++ = *mcsrc++; \
    case 2:           *mcdst++ = *mcsrc++; \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
  } \
} while(0)

#endif

/*
    Define HAVE_MMAP to optionally make malloc() use mmap() to
    allocate very large blocks. These will be returned to the
    operating system immediately after a free().
*/

/***
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif
***/
#undef HAVE_MMAP /* Not available for U-Boot */

/*
    Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
    large blocks. This is currently only possible on Linux with
    kernel versions newer than 1.3.77.
*/

/***
#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif
***/
#undef HAVE_MREMAP /* Not available for U-Boot */

#ifdef HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
    Access to system page size. To the extent possible, this malloc
    manages memory from the system in page-size units.

    The following mechanics for getpagesize were adapted from
    bsd/gnu getpagesize.h
*/

#define LACKS_UNISTD_H /* Shortcut for U-Boot */
#define malloc_getpagesize 4096

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif

/*

    This version of malloc supports the standard SVID/XPG mallinfo
    routine that returns a struct containing the same kind of
    information you can get from malloc_stats. It should work on
    any SVID/XPG compliant system that has a /usr/include/malloc.h
    defining struct mallinfo. (If you'd like to install such a thing
    yourself, cut out the preliminary declarations as described above
    and below and save them in a malloc.h file. But there's no
    compelling reason to bother to do this.)

    The main declaration needed is the mallinfo struct that is returned
    (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
    bunch of fields, most of which are not even meaningful in this
    version of malloc. Some of these fields are instead filled by
    mallinfo() with other numbers that might possibly be of interest.

    HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
    /usr/include/malloc.h file that includes a declaration of struct
    mallinfo. If so, it is included; else an SVID2/XPG2 compliant
    version is declared below. These must be precisely the same for
    mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
	int arena;    /* total space allocated from system */
	int ordblks;  /* number of non-inuse chunks */
	int smblks;   /* unused -- always zero */
	int hblks;    /* number of mmapped regions */
	int hblkhd;   /* total space in mmapped regions */
	int usmblks;  /* unused -- always zero */
	int fsmblks;  /* unused -- always zero */
	int uordblks; /* total allocated space */
	int fordblks; /* total non-inuse space */
	int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST 1 /* UNUSED in this malloc */
#define M_NLBLKS 2 /* UNUSED in this malloc */
#define M_GRAIN  3 /* UNUSED in this malloc */
#define M_KEEP   4 /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD -1
#define M_TOP_PAD        -2
#define M_MMAP_THRESHOLD -3
#define M_MMAP_MAX       -4

#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
    to keep before releasing via malloc_trim in free().

    Automatic trimming is mainly useful in long-lived programs.
    Because trimming via sbrk can be slow on some systems, and can
    sometimes be wasteful (in cases where programs immediately
    afterward allocate more large chunks) the value should be high
    enough so that your overall system performance would improve by
    releasing.

    The trim threshold and the mmap control parameters (see below)
    can be traded off with one another. Trimming and mmapping are
    two different ways of releasing unused memory back to the
    system. Between these two, it is often possible to keep
    system-level demands of a long-lived program down to a bare
    minimum. For example, in one test suite of sessions measuring
    the XF86 X server on Linux, using a trim threshold of 128K and a
    mmap threshold of 192K led to near-minimal long term resource
    consumption.

    If you are using this malloc in a long-lived program, it should
    pay to experiment with these values. As a rough guide, you
    might set to a value close to the average size of a process
    (program) running on your system. Releasing this much memory
    would allow such a process to run in memory. Generally, it's
    worth it to tune for trimming rather than memory mapping when a
    program undergoes phases where several large chunks are
    allocated and released in ways that can reuse each other's
    storage, perhaps mixed with phases where there are no such
    chunks at all. And in well-behaved long-lived programs,
    controlling release of large blocks via trimming versus mapping
    is usually faster.

    However, in most programs, these parameters serve mainly as
    protection against the system-level effects of carrying around
    massive amounts of unneeded memory. Since frequent calls to
    sbrk, mmap, and munmap otherwise degrade performance, the default
    parameters are set to relatively high values that serve only as
    safeguards.

    The default trim value is high enough to cause trimming only in
    fairly extreme (by current memory consumption standards) cases.
    It must be greater than page size to have any useful effect. To
    disable trimming completely, you can set to (unsigned long)(-1);

*/

#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
    retain whenever sbrk is called. It is used in two ways internally:

    * When sbrk is called to extend the top of the arena to satisfy
      a new malloc request, this much padding is added to the sbrk
      request.

    * When malloc_trim is called automatically from free(),
      it is used as the `pad' argument.

    In both cases, the actual amount of padding is rounded
    so that the end of the arena is always a system page boundary.

    The main reason for using padding is to avoid calling sbrk so
    often. Having even a small pad greatly reduces the likelihood
    that nearly every malloc request during program start-up (or
    after trimming) will invoke sbrk, which needlessly wastes
    time.

    Automatic rounding-up to page-size units is normally sufficient
    to avoid measurable overhead, so the default is 0. However, in
    systems where sbrk is relatively slow, it can pay to increase
    this value, at the expense of carrying around more memory than
    the program needs.

*/

#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    to service a request. Requests of at least this size that cannot
    be allocated using already-existing space will be serviced via mmap.
    (If enough normal freed space already exists it is used instead.)

    Using mmap segregates relatively large chunks of memory so that
    they can be individually obtained and released from the host
    system. A request serviced through mmap is never reused by any
    other request (at least not directly; the system may just so
    happen to remap successive requests to the same locations).

    Segregating space in this way has the benefit that mmapped space
    can ALWAYS be individually released back to the system, which
    helps keep the system level memory demands of a long-lived
    program low. Mapped memory can never become `locked' between
    other chunks, as can happen with normally allocated chunks, which
    means that even trimming via malloc_trim would not release them.

    However, it has the disadvantages that:

      1. The space cannot be reclaimed, consolidated, and then
	 used to service later requests, as happens with normal chunks.
      2. It can lead to more wastage because of mmap page alignment
	 requirements.
      3. It causes malloc performance to be more dependent on host
	 system memory management support routines which may vary in
	 implementation quality and may impose arbitrary
	 limitations. Generally, servicing a request via normal
	 malloc steps is faster than going through a system's mmap.

    All together, these considerations should lead you to use mmap
    only for relatively large requests.

*/

#ifndef DEFAULT_MMAP_MAX
#ifdef HAVE_MMAP
#define DEFAULT_MMAP_MAX (64)
#else
#define DEFAULT_MMAP_MAX (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
    service using mmap. This parameter exists because:

      1. Some systems have a limited number of internal tables for
	 use by mmap.
      2. In most systems, overreliance on mmap can degrade overall
	 performance.
      3. If a program allocates many large regions, it is probably
	 better off using normal sbrk-based allocation routines that
	 can reclaim and reallocate normal heap memory. Using a
	 small value allows transition into this mode after the
	 first few allocations.

    Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
    the default value is 0, and attempts to set it to non-zero values
    in mallopt will fail.
*/

/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
    Useful to quickly avoid procedure declaration conflicts and linker
    symbol conflicts with existing memory allocation routines.

*/

/*
 * Rename the U-Boot alloc functions so that sandbox can still use the system
 * ones
 */
#ifdef CONFIG_SANDBOX
#define USE_DL_PREFIX
#endif

/*

    Special defines for linux libc

    Except when compiled using these special defines for Linux libc
    using weak aliases, this malloc is NOT designed to work in
    multithreaded applications. No semaphores or other concurrency
    control are provided to ensure that multiple malloc or free calls
    don't run at the same time, which could be disastrous. A single
    semaphore could be used across malloc, realloc, and free (which is
    essentially the effect of the linux weak alias approach). It would
    be hard to obtain finer granularity.

*/

#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc __libc_calloc
#define fREe __libc_free
#define mALLOc __libc_malloc
#define mEMALIGn __libc_memalign
#define rEALLOc __libc_realloc
#define vALLOc __libc_valloc
#define pvALLOc __libc_pvalloc
#define mALLINFo __libc_mallinfo
#define mALLOPt __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

void malloc_simple_info(void);

/**
 * malloc_enable_testing() - Put malloc() into test mode
 *
 * This only works if UNIT_TESTING is enabled
 *
 * @max_allocs: return -ENOMEM after max_allocs calls to malloc()
 */
void malloc_enable_testing(int max_allocs);

/** malloc_disable_testing() - Put malloc() into normal mode */
void malloc_disable_testing(void);
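/*
 * Example (sketch): a unit test can force allocation failures to exercise
 * error paths. The test body here is hypothetical; only the two functions
 * above come from this header:
 *
 *	malloc_enable_testing(0);	// next malloc() returns NULL
 *	assert(!malloc(16));
 *	malloc_disable_testing();	// back to normal allocation
 */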

#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE)
#define malloc malloc_simple
#define realloc realloc_simple
#define memalign memalign_simple
#if IS_ENABLED(CONFIG_VALGRIND)
#define free free_simple
#else
static inline void free(void *ptr) {}
#endif
void *calloc(size_t nmemb, size_t size);
void *realloc_simple(void *ptr, size_t size);
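/*
 * Example (sketch): with SYS_MALLOC_SIMPLE enabled, the #defines above route
 * ordinary calls to the simple bump allocator, so freed memory is normally
 * not reclaimed:
 *
 *	void *p = malloc(32);	// expands to malloc_simple(32)
 *	free(p);		// no-op unless CONFIG_VALGRIND is enabled
 */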
#else

# ifdef USE_DL_PREFIX
# define cALLOc dlcalloc
# define fREe dlfree
# define mALLOc dlmalloc
# define mEMALIGn dlmemalign
# define rEALLOc dlrealloc
# define vALLOc dlvalloc
# define pvALLOc dlpvalloc
# define mALLINFo dlmallinfo
# define mALLOPt dlmallopt

/* Ensure that U-Boot actually uses these too */
#define calloc dlcalloc
#define free(ptr) dlfree(ptr)
#define malloc(x) dlmalloc(x)
#define memalign dlmemalign
#define realloc dlrealloc
#define valloc dlvalloc
#define pvalloc dlpvalloc
#define mallinfo() dlmallinfo()
#define mallopt dlmallopt
#define malloc_trim dlmalloc_trim
#define malloc_usable_size dlmalloc_usable_size
#define malloc_stats dlmalloc_stats

# else /* USE_DL_PREFIX */
# define cALLOc calloc
# define fREe free
# define mALLOc malloc
# define mEMALIGn memalign
# define rEALLOc realloc
# define vALLOc valloc
# define pvALLOc pvalloc
# define mALLINFo mallinfo
# define mALLOPt mallopt
# endif /* USE_DL_PREFIX */

#endif

/* Set up pre-relocation malloc() ready for use */
int initf_malloc(void);

/* Public routines */

/* Simple versions which can be used when space is tight */
void *malloc_simple(size_t size);
void *memalign_simple(size_t alignment, size_t bytes);

#pragma GCC visibility push(hidden)
# if __STD_C

Void_t* mALLOc(size_t);
void fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void cfree(Void_t*);
int malloc_trim(size_t);
size_t malloc_usable_size(Void_t*);
void malloc_stats(void);
int mALLOPt(int, int);
struct mallinfo mALLINFo(void);
# else
Void_t* mALLOc();
void fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void cfree();
int malloc_trim();
size_t malloc_usable_size();
void malloc_stats();
int mALLOPt();
struct mallinfo mALLINFo();
# endif
#endif
#pragma GCC visibility pop

/*
 * Begin and End of memory area for malloc(), and current "brk"
 */
extern ulong mem_malloc_start;
extern ulong mem_malloc_end;
extern ulong mem_malloc_brk;

/**
 * mem_malloc_init() - Set up the malloc() pool
 *
 * Sets the region of memory to be used for all future calls to malloc(), etc.
 *
 * @start: Start address
 * @size: Size in bytes
 */
void mem_malloc_init(ulong start, ulong size);
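/*
 * Example (sketch, with made-up address and size): hand a 1 MiB region to
 * the allocator before the first call to malloc():
 *
 *	mem_malloc_init(0x84000000, 0x100000);
 *	void *p = malloc(256);
 */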

#ifdef __cplusplus
}; /* end of extern "C" */
#endif

#endif /* __MALLOC_H__ */