Posts Tagged ‘recovery’

With InnoDB’s Transportable Tablespaces, Recovering Data from Stranded .ibd Files is a Thing of the Past

Thursday, April 26th, 2012

Being a data recovery specialist and having recovered countless GBs of corrupted and/or stranded InnoDB data in my day, I am very happy to hear about the new InnoDB Transportable Tablespaces coming in MySQL 5.6!

Back in the day, if you had a stranded .ibd file (an individual InnoDB data file created with the --innodb-file-per-table option), you basically had nothing, even though that file contained all of the data. Unless you had the original instance that the particular .ibd file (table) originated from, there was no way to load it, import it, or dump from it. So it was not of much use, though all the data was *right* there.

Thus I created the method of Recovering an InnoDB table from only an .ibd file (I should note that this was before the InnoDB Recovery Tool had been released, which can also be used to recover data from a stranded .ibd file).

However, if you’ve used either my method or the InnoDB Recovery Tool for such a job, it can be a bit of work to get the data dumped. For those experienced, it goes much faster. But still, you cannot get any faster than just being able to (roughly) import the individual tablespace right into any running MySQL 5.6 instance. :)

Nice work! :)

Note: Again, I must mention that this is only in MySQL 5.6, so if you have a stranded .ibd file from a pre-5.6 instance that you need to recover data from, you'll need to use either my method or the InnoDB Recovery Tool.


MySQL High Availability Manager (MHA) 0.53 Has Been Released, and You Can Get Support for It at SkySQL

Tuesday, January 10th, 2012

I just wanted to let you all know that MHA for MySQL (Master High Availability Manager and tools for MySQL) version 0.53 has been released.

Yoshinori Matsunobu discusses the release in much more detail here:

http://yoshinorimatsunobu.blogspot.com/2012/01/mha-for-mysql-053-released.html

The full MHA 0.53 changelog is here:

http://code.google.com/p/mysql-master-ha/wiki/ReleaseNotes

MHA 0.53 can be downloaded from here:

http://code.google.com/p/mysql-master-ha/downloads/list

And if you would like support for MHA, simply contact SkySQL:

http://www.skysql.com/how-to-buy


How to Recover Data using the InnoDB Recovery Tool

Wednesday, April 15th, 2009

As you may or may not know, there is a tool called the InnoDB Recovery Tool which can allow you to recover data from InnoDB tables when you cannot get at the data any other way.

“This set of tools could be used to check InnoDB tablespaces and to recover data from damaged tablespaces or from dropped/truncated InnoDB tables.”

http://code.google.com/p/innodb-tools/

This is a very handy tool; however, the documentation is a bit limited when it comes to actually recovering the data, so I thought I'd post a step-by-step tutorial on how to use it.

1. Download the InnoDB Recovery Tool (latest version is 0.3)

2. Unpack the download to the location of your choice

3. Create your table_defs.h file using the create_defs.pl script. Note that the command below creates a table_defs.h file based on only one table, t1, from the database named test:

cd innodb-recovery-tool-0.3/
./create_defs.pl --user=root --password=mysql --db=test --table=t1 > table_defs.h

4. Copy the newly created table_defs.h file to the innodb-recovery-tool-0.3/include/ directory.

5. Now it is time to build/compile the InnoDB Recovery Tool:

cd innodb-recovery-tool-0.3/mysql-source/
./configure
cd ..
make

At this point, you're almost ready to begin recovering the data. However, let me point out a couple of items first. The InnoDB Recovery Tool documentation says you can use the page_parser program to split up the tablespace, and since page_parser is built by the compile above, you can use it to break the tablespace apart into individual pages.

In my case, however, page_parser didn't work as well as I expected, which could be due to the corruption in the tablespace files (ibdata1 and ibdata2). So I simply ran the recovery against the entire ibdata files instead, and found that I recovered much more data that way than by running it against the split-up pages. If you opt for this method, you can skip steps 6, 7, and 8.

6. Should you want to use the page_parser, here is how you run it:

cd innodb-recovery-tool-0.3/
./page_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5

Note that the -f indicates the file(s) to use, and the -5 indicates the ibdata files are from MySQL version 5.0.

7. Should you use the page_parser, you must also load the ibdata file(s) and capture the InnoDB table monitor output. This part is described in the InnoDB Tools how-to, but a rough sketch of enabling the monitor follows.
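On a pre-5.6 server that is still running, one way to get that output is to create the specially named innodb_table_monitor table, which makes InnoDB write table and index metadata (including the index IDs used in the next step) to the error log roughly once a minute (the database it lives in does not matter; I use test here for illustration):

-- Creating this specially named table turns the InnoDB table monitor on.
CREATE TABLE test.innodb_table_monitor (a INT) ENGINE=InnoDB;

-- ...check the error log for the index IDs, then turn the monitor off:
DROP TABLE test.innodb_table_monitor;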

8. After running the above, you'll want to capture the primary key index IDs for each table you want to recover. For instance, you might see something like "0 135" for the index ID of a primary key; this corresponds to the directory named "0-135" that page_parser creates.

9. Now you are ready to recover the data for the first table.

(Note that you could create a table_defs.h file based on all of the tables you want to recover, and then recover all of the data at once. The problem with this is that the data is all mixed together in one big file: you might have a row from one table followed by a row from another. If you're good with sed/awk, that might not be a problem for you, since you can split it apart afterwards. However, it is usually easier to create a separate table_defs.h file for each table and recover the data table-by-table.)

If you want to recover the data based on the page_parser output, then you would use the following command:

./constraints_parser -f /home/chris/Desktop/innodb-recovery-0.3/pages-1239037839/0-135/50-00000050.page -5 -V

Note that the -V is for verbose mode. It is best to use verbose mode initially to make sure the recovered data looks correct. Once you've verified it, simply run the above command without the -V and pipe the output to a text file.

Should you not want to use the page_parser, and just run constraints_parser directly against the ibdata file(s), then issue the following command instead:

./constraints_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5 > output.txt

As for the recovered data itself, note that it is dumped in a tab-delimited text format (the tool's default, which is not yet configurable).

For instance, here is a sample of data recovered for the t1 table:

t1 128992703 84118144 301989888 224000 33558272 268435456 ""
t1 0 0 34796032 0 530 838926338 ""
t1 1886545261 268455808 256 497 880803840 2949392 ""
t1 1398034253 1953654117 1952672116 2037609569 1952801647 1970173042 ""
t1 402667648 755047491 1431524431 1296388657 825372977 825308725 "5"
t1 536884352 755050563 1431524431 1296388658 842150450 842162531 "t"
t1 671103872 755053635 1431524431 1296388663 926365495 926365495 "77"
t1 524288 0 755056707 1431524431 1296388705 1668573558 ""
t1 524288 0 755059779 1431524431 1296388705 1668573558 ""
t1 524288 0 755062851 1431524431 1296388705 1668573558 ""
t1 525312 0 755065923 1431524431 1296388705 1668573558 ""
t1 524288 0 755068995 1431524431 1296388705 1668573558 ""
t1 524288 0 755072067 1431524431 1296388705 1668573558 ""
t1 524288 0 755075139 1431524431 1296388705 1668573558 ""
t1 525312 0 755078211 1431524431 1296388705 1668573558 ""
t1 524288 0 755081283 1431524431 1296388705 1668573558 ""
t1 524288 0 755084355 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""

You can see each line is prepended with the table name (followed by a tab).

You can also see at the end of the above output that there are a number of empty rows. These are just garbage rows and can be deleted before you import, or afterwards. You'll see similar rows in most of the recovered tables' data as well. However, don't just delete from the end of the file, as actual data rows are scattered throughout the files.

I'd also suggest creating temporary tables using the same CREATE TABLE commands, but without any keys or indexes. This allows you to import the data more easily and then clean it up with simple SQL commands. After that, you can simply add back your primary keys, indexes, and referential keys (a combined sketch of this appears after step 12).

Should you follow my approach and do this per-table, you just need to create the new table_defs.h file, copy it into include/, re-run make, and then re-run constraints_parser just as you did above. Since the tool is rebuilt with the new table_defs.h, it will now extract the data for that table; no other changes need to be made. A rough sketch of this per-table cycle follows.
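Here is a minimal sketch of that cycle, assuming hypothetical tables t1, t2, and t3 in the test database, and assuming the original schema is still available for create_defs.pl to read:

cd innodb-recovery-tool-0.3/
for TBL in t1 t2 t3; do
    # Regenerate table_defs.h for just this table and rebuild the parser
    ./create_defs.pl --user=root --password=mysql --db=test --table=$TBL > include/table_defs.h
    make
    # Extract only this table's rows from the ibdata files
    ./constraints_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5 > ${TBL}.txt
done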

10. Format the dump file(s) so that they can be imported into the appropriate table(s).

11. Import the data, and clean up the garbage rows.

12. Re-create any needed indexes and/or referential keys.
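To tie steps 10 through 12 together, here is a minimal sketch, assuming the constraints_parser output for t1 was saved to /tmp/output.txt and that t1 has a hypothetical schema of six integer columns plus one string column (matching the sample output above):

-- Create t1 with no keys or indexes, per the suggestion above, so the
-- recovered rows load cleanly. (The schema here is purely illustrative.)
CREATE TABLE t1 (
    c1 INT, c2 INT, c3 INT, c4 INT, c5 INT, c6 INT, c7 VARCHAR(32)
) ENGINE=InnoDB;

-- Steps 10/11: the dump is tab-delimited, and every line begins with the
-- table name plus a tab, which LINES STARTING BY strips during the load.
LOAD DATA INFILE '/tmp/output.txt' INTO TABLE t1
    FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"'
    LINES STARTING BY 't1\t';

-- Step 11: delete the all-zero garbage rows seen in the sample output
-- (adjust the predicate to whatever garbage your data contains).
DELETE FROM t1
 WHERE c1 = 0 AND c2 = 0 AND c3 = 0 AND c4 = 0 AND c5 = 0 AND c6 = 0
   AND c7 = '';

-- Step 12: add back the keys and indexes (c1 is a hypothetical key here).
ALTER TABLE t1 ADD PRIMARY KEY (c1);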

