How to Recover Data using the InnoDB Recovery Tool

As you may or may not know, there is a tool called the InnoDB Recovery Tool, which allows you to recover data from InnoDB tables when you cannot otherwise get at the data.

“This set of tools could be used to check InnoDB tablespaces and to recover data from damaged tablespaces or from dropped/truncated InnoDB tables.”

This is a very handy tool; however, the documentation is a bit limited when it comes to actually recovering the data, so I thought I’d post a step-by-step tutorial on how to use it.

1. Download the InnoDB Recovery Tool (latest version is 0.3)

2. Unpack the download to the location of your choice

3. Create your table_defs.h file using the create_defs.pl script. Note that the command below creates a table_defs.h file based on only one table, t1, from the database named test:

cd innodb-recovery-tool-0.3/
./create_defs.pl --user=root --password=mysql --db=test --table=t1 > table_defs.h

4. Copy the newly created table_defs.h file to the innodb-recovery-tool-0.3/include/ directory.

5. Now it is time to build/compile the InnoDB Recovery Tool:

cd innodb-recovery-tool-0.3/mysql-source/
./configure
cd ..
make

At this point, you’re almost ready to begin recovering the data. First, though, a couple of notes. The InnoDB Recovery Tool documentation says you can use the page_parser program (built during the compile above) to split the tablespace into individual pages. In my case, however, page_parser didn’t work as well as I expected, possibly due to the corruption in my tablespace files (ibdata1 and ibdata2). So I simply ran the recovery against the entire ibdata files instead, and found that I recovered much more data that way than from the split-up pages. If you opt for this method, you can skip steps 6, 7, and 8.

6. Should you want to use the page_parser, here is how you run it:

cd innodb-recovery-tool-0.3/
./page_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5

Note that the -f indicates the file(s) to use, and the -5 indicates the ibdata files are from MySQL version 5.0.

7. Should you use the page_parser, you must also load the ibdata file(s) and capture the InnoDB tablespace monitor output. This part is described on the InnoDB Tools how-to.

8. After running the above, you’ll want to note the primary key index ID for each table you want to recover. For instance, you might see something like “0 135” for a primary key’s index ID. This corresponds to the folder named “0-135” that page_parser creates.

9. Now you are ready to recover the data for the first table.

(Note that you could create a single table_defs.h file based on all of the tables you want to recover and then recover everything at once. The problem with this is that all of the data ends up mixed together in one big file, so you might have a row from one table followed by a row from another. If you’re comfortable with sed/awk, that may not be a problem, since you can split the file apart afterwards. However, it is often easier to create a separate table_defs.h file for each table and recover the data table-by-table.)
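If you do recover everything into one file, the mixed dump can be split back out by table afterwards, since each row starts with the table name. A minimal sketch, assuming a tab-delimited dump (the sample input here is hypothetical, with fewer columns than a real dump):

```shell
# Create a small hypothetical mixed dump: table name in column 1.
printf 't1\t1\t"a"\nt2\t2\t"b"\nt1\t3\t"c"\n' > mixed_dump.txt

# Write each row to a per-table file named after column 1.
awk -F'\t' '{ print > ($1 ".dump") }' mixed_dump.txt

wc -l t1.dump t2.dump
```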

If you want to recover the data based on the page_parser output, then you would use the following command:

./constraints_parser -f /home/chris/Desktop/innodb-recovery-0.3/pages-1239037839/0-135/ -5 -V

Note that the -V flag enables verbose mode. It is best to use it initially to make sure the recovered data looks correct. Once you’ve verified that it does, simply run the same command without the -V and pipe the output to a text file.

Should you not want to use the page_parser, and just run constraints_parser directly against the ibdata file(s), then issue the following command instead:

./constraints_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5 > output.txt

As for the recovered data itself, note that the InnoDB Recovery Tool dumps it in a tab-delimited text format (the default, and not yet configurable).

For instance, here is a sample of data recovered for the t1 table:

t1 128992703 84118144 301989888 224000 33558272 268435456 ""
t1 0 0 34796032 0 530 838926338 ""
t1 1886545261 268455808 256 497 880803840 2949392 ""
t1 1398034253 1953654117 1952672116 2037609569 1952801647 1970173042 ""
t1 402667648 755047491 1431524431 1296388657 825372977 825308725 "5"
t1 536884352 755050563 1431524431 1296388658 842150450 842162531 "t"
t1 671103872 755053635 1431524431 1296388663 926365495 926365495 "77"
t1 524288 0 755056707 1431524431 1296388705 1668573558 ""
t1 524288 0 755059779 1431524431 1296388705 1668573558 ""
t1 524288 0 755062851 1431524431 1296388705 1668573558 ""
t1 525312 0 755065923 1431524431 1296388705 1668573558 ""
t1 524288 0 755068995 1431524431 1296388705 1668573558 ""
t1 524288 0 755072067 1431524431 1296388705 1668573558 ""
t1 524288 0 755075139 1431524431 1296388705 1668573558 ""
t1 525312 0 755078211 1431524431 1296388705 1668573558 ""
t1 524288 0 755081283 1431524431 1296388705 1668573558 ""
t1 524288 0 755084355 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""

You can see that each line is prepended with the table name (followed by a tab).

You can also see a number of empty rows at the end of the above output. These are just garbage rows, and they can be deleted either before or after you import. You’ll see similar rows in most of the recovered tables’ data as well. However, don’t just delete from the end of the file, as actual data rows are scattered throughout.
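A quick awk filter can drop the all-zero garbage rows (rows whose numeric columns are all zero and whose string column is an empty ""). A minimal sketch against a simplified, hypothetical dump with one numeric column:

```shell
# Simplified hypothetical dump: name, one numeric column, one string column.
printf 't1\t5\t"x"\nt1\t0\t""\nt1\t7\t"y"\n' > t1.dump

# Keep a row if any numeric column is non-zero or the string is non-empty.
awk -F'\t' '{
    keep = 0
    for (i = 2; i < NF; i++)        # numeric columns between name and string
        if ($i != 0) keep = 1
    if ($NF != "\"\"") keep = 1     # non-empty string column
    if (keep) print
}' t1.dump > t1.clean

wc -l < t1.clean
```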

I’d also suggest creating temporary tables using the same CREATE TABLE statements, but without any keys or indexes. This will make the data much easier to import, and you can then clean it up with simple SQL statements. After that, you can simply add back your primary keys, indexes, and referential keys.

Should you follow my approach and do this per-table, then you just need to create your new table_defs.h file, re-compile and make, then re-run the constraints_parser just as you did above. Since it is built with the new table_defs.h file, it will now extract the data for this table, so no other changes need to be made.
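The per-table cycle can be sketched as a small shell loop. The table names, paths, and credentials below are hypothetical, and the tool invocations are commented out so you can adapt them to your own setup:

```shell
# Rebuild the tool and re-run constraints_parser once per table.
for table in t1 t2 t3; do
    echo "recovering $table"
    # ./create_defs.pl --user=root --password=mysql --db=test \
    #     --table="$table" > include/table_defs.h
    # make
    # ./constraints_parser -f /home/chris/Desktop/test/ibdata1 -5 > "$table.txt"
done
```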

10. Format the dump file(s) so that they can be imported into the appropriate table(s).

11. Import the data, and clean up the garbage rows.

12. Re-create any needed indexes and/or referential keys.
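As a concrete sketch of steps 10 and 11, assuming a tab-delimited dump for t1: strip the leading table-name column, then load the result into the temporary, index-free table. The mysql invocation is commented out here since connection details vary (the table and file names are hypothetical):

```shell
# Hypothetical two-row dump for t1.
printf 't1\t5\t"abc"\nt1\t7\t"def"\n' > t1.dump

# Step 10: drop the leading table-name column.
cut -f2- t1.dump > t1.load

# Step 11 (sketch): import into a temporary copy of the table.
# mysql test -e "LOAD DATA INFILE '$PWD/t1.load' INTO TABLE t1_tmp
#               FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '\"'"

head -c 1 t1.load
```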

16 thoughts on “How to Recover Data using the InnoDB Recovery Tool”

  1. Thanks – this is a lifesaver! I accidentally left my MySQL data directory in /tmp, and the .frm file got cleaned up by tmpwatch since it’s never modified. (Naturally, no backups – who backs up /tmp?)

    I’ve written a (very) crude script in Ruby to automate some of this – especially the “recompile for each table”. It’s at, and will probably grow a little as I clean up my garbage rows.

    One thing to watch out for: The garbage rows aren’t necessarily nulls. I have all kinds of true garbage in mine – random strings from other tables, etc. Today’s project: get recover_innodb_tables to weed those out if you use a Rails-style schema (“id” is a small, positive, non-null integer; anything_id must be updated_at < now, etc.)

  2. @Jay – Great to hear this helped you out! Also nice to see you’ve worked on a Ruby script to help automate some of the steps. Thanks for sharing 🙂

  3. Hi!

    This looks like a nice tool for InnoDB recovery. However, when I tried to recover the database, it successfully parsed the ibdata file into pages. But it gave a different output when I executed a ./constraints_parser on one of the pages. I could not decipher what that output meant and hence, posting it over here. It would be of great help if somebody could just take a look at the output I’ve uploaded and explain what this means.

    Link to the uploaded .txt file:

  4. @Nikhil: I’m unable to download the file from that site. If you can post it somewhere else, I’d be happy to take a look.

  5. Pingback: Bob Anderson
  6. If you are using version 0.4 of the tool and have problems with this method (it was written against version 0.3), you may want to check out this post from another user, who alerted me to it via a trackback (I had briefly disabled comments due to the amount of spam):

    Thanks Thomas! 🙂

  7. @Gajendra,

    It’s definitely possible to recover data from an ibdata file that originates from a Windows instance. However, you may need to actually perform the recovery on a Linux machine. I mean, it may be possible to build the tool on Windows, I’ve never tried it. But I’m sure it’s much easier to build/configure on Linux, and perform the recovery there. And then just import the dump back into your Windows instance.

  8. Hi,

    I can’t find a solution to my problem; I hope you can tell me “it’s possible”, and how…

    My MySQL instance crashed when the disk ran out of free space. I could see all the files in /var/lib/mysql: ibdata1, ib_logfile*, and all the folders containing .frm files. However, once I had fixed the problem and successfully restarted the instance, some databases had disappeared. One of them is the most important, and I don’t know how many tables it had or what their structures were. Is there any way to recover the entire lost database (structure and data) with only the ibdata1 file? I’ve read this great tutorial, but I don’t know how to do step 3 (“Create your table_defs.h”) because the running instance doesn’t have any tables related to this database.

    Please help me. Thanks a lot!

  9. @Paul,

    Unfortunately, without the table defs, I don’t see how this would be possible. I mean, you’d need to examine every bit of page info, as you wouldn’t know the tables or their structure or how many rows each contain. So you’d ultimately have to “guess” each table def, and then find each row of data after that and figure out to which table it belongs.

    Did you try starting mysql with innodb_force_recovery = 6 (or lower, if possible) and then dumping all of your data to obtain a backup? This should be possible.
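    A minimal sketch of that approach (paths and credentials are placeholders): set the option in my.cnf, restart mysqld, and then dump everything:

    ```
    # /etc/my.cnf (start with the lowest value that keeps mysqld up,
    # raising it toward 6 only if the server still crashes on startup)
    [mysqld]
    innodb_force_recovery = 1

    # then, from a shell:
    #   mysqldump --all-databases --user=root --password > full_backup.sql
    ```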
