rt-serializer - Serialize an RT database to disk
rt-validator --check && rt-serializer
This script is used to write out the entire RT database to disk, for later import into a different RT instance. In order to do so, it requires that the data in the database be self-consistent; please make sure that the database being exported passes validation by rt-validator before attempting to use this tool.
While running, it will attempt to estimate the number of remaining objects to be serialized; these estimates are pessimistic, and will be incorrect if --no-tickets is used.
If the controlling terminal is large enough (more than 25 lines high) and the gnuplot program is installed, it will also show a textual graph of the queue size over time.
- --directory name
The name of the output directory to write data files to, which should not exist yet; it is a fatal error if it does. Defaults to
./$Organization:Date/, where $Organization is as set in RT_SiteConfig.pm, and Date is today's date.
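The default directory name can be pictured as follows; this is only a sketch, where `organization` stands in for the `$Organization` value from RT_SiteConfig.pm, and the exact date format RT uses may differ:

```python
from datetime import date

def default_output_dir(organization, today=None):
    """Sketch of how the default ./$Organization:Date/ name is formed.

    `organization` stands in for the $Organization setting in
    RT_SiteConfig.pm; the ISO date format here is illustrative.
    """
    today = today or date.today()
    return f"./{organization}:{today.isoformat()}/"

print(default_output_dir("Example.com", date(2024, 1, 15)))
# ./Example.com:2024-01-15/
```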
- --force
Remove the output directory before starting.
- --size megabytes
rt-serializer chunks its output into data files which are around 32Mb in size; this option is used to set a different threshold size, in megabytes. Note that this is the threshold after which it rotates to writing a new file, and is as such the lower bound on the size of each output file.
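The rotation rule can be sketched as follows; this is a simplified model of the behavior described above, not RT's actual code. A record is always appended to the current file, and a new file is started only once the current one has passed the threshold, which is why the threshold is a lower bound rather than a cap:

```python
def chunk_records(records, threshold):
    """Sketch of threshold-based rotation: append first, rotate after.

    Because the size check happens only after a write, every finished
    chunk is at least `threshold` bytes -- the threshold is a lower
    bound on chunk size, not a cap.  A model, not RT's implementation.
    """
    chunks, current, size = [], [], 0
    for rec in records:
        current.append(rec)
        size += len(rec)
        if size >= threshold:        # rotate only after exceeding it
            chunks.append(current)
            current, size = [], 0
    if current:
        chunks.append(current)
    return chunks

sizes = [sum(map(len, c)) for c in chunk_records(["a" * 10] * 7, 25)]
print(sizes)  # [30, 30, 10] -- finished chunks exceed the 25-byte threshold
```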
- --no-users
By default, all privileged users are serialized; passing --no-users limits it to only those users which are referenced by serialized tickets and history, and are thus necessary for internal consistency.
- --no-groups
By default, all groups are serialized; passing --no-groups limits it to only system-internal groups, which are needed for internal consistency.
- --no-deleted
By default, all tickets, including deleted tickets, are serialized; passing --no-deleted skips deleted tickets during serialization.
- --scrips
No scrips or templates are serialized by default; this option forces all scrips and templates to be serialized.
- --acls
No ACLs are serialized by default; this option forces all ACLs to be serialized.
- --no-tickets
Skip serialization of all ticket data.
- --queues
Takes a list of queue IDs or names separated by commas. When provided, only that set of queues (and the tickets in them) will be serialized.
Takes a list of custom field IDs or names separated by commas. When provided, only that set of custom fields will be serialized.
- --hyperlink-unmigrated
Replace links to local records which are not being migrated with hyperlinks. The hyperlinks will use the serializing RT's configured URL.
Without this option, such links are instead dropped, and transactions which had updated such links will be replaced with an explanatory message.
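Conceptually, the replacement works like the sketch below. Everything here is illustrative: `base_url` stands in for the serializing RT's configured URL, `migrated_ids` for the set of records being migrated, and the ticket-display URL format is an assumption:

```python
def rewrite_link(record_id, migrated_ids, base_url):
    """Sketch of the hyperlink behavior: links to migrated records
    survive as local references; links to everything else become
    plain URLs pointing back at the source RT.  Names and the URL
    format are hypothetical, not RT's internals."""
    if record_id in migrated_ids:
        return ("local", record_id)   # record travels with the export
    return ("url", f"{base_url}/Ticket/Display.html?id={record_id}")

print(rewrite_link(7, {1, 2, 3}, "https://rt.example.com"))
# ('url', 'https://rt.example.com/Ticket/Display.html?id=7')
```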
- --clone
Serializes your entire database, creating a clone. This option should be used if you want to migrate your RT database from one database type to another (e.g. MySQL to Postgres). It is an error to combine --clone with any option that limits the object types serialized. No dependency walking is performed when cloning.
rt-importer will detect that your serialized data set was generated by a clone.
- --incremental
Will generate an incremental serialized dataset using the data stored in your IncrementalRecords database table. This assumes that you have created that table and run RT using the Record_Local.pm shim, as documented in the incremental export documentation shipped with RT.
- --gc n
Adjust how often the garbage collection sweep is done; lower numbers are more frequent. See "GARBAGE COLLECTION".
- --page n
Adjust how many rows are pulled from the database in a single query. Disable paging by setting this to 0. Defaults to 100.
Keep in mind that rows from RT's Attachments table are the limiting factor when determining page size. You should likely be aiming for 60-75% of your total memory on an otherwise unloaded box.
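The effect of --page can be pictured as a simple limit/offset loop; this is a sketch of the idea, not RT's actual query code, and `fetch` stands in for the database query:

```python
def paged(fetch, page_size):
    """Yield rows in batches of `page_size`; page_size == 0 means one
    unpaged fetch of everything.  `fetch(limit, offset)` is a
    hypothetical stand-in for the real database query."""
    if page_size == 0:
        yield from fetch(None, 0)     # paging disabled: one big query
        return
    offset = 0
    while True:
        rows = fetch(page_size, offset)
        if not rows:
            break
        yield from rows
        offset += page_size

# A fake table standing in for a database:
table = list(range(250))
fetch = lambda limit, offset: table[offset:offset + limit] if limit else table
print(len(list(paged(fetch, 100))))  # 250, pulled 100 rows at a time
```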
- --quiet
Do not show graphical progress UI.
- --verbose
Do not show graphical progress UI, but rather log as each row is written out.
GARBAGE COLLECTION
rt-serializer maintains a priority queue of objects to serialize, or searches which may result in objects to serialize. When inserting into this queue, it does not check whether the object in question is already in the queue, or whether the search will return any results. These checks are done when the object reaches the front of the queue, or during periodic garbage collection.
During periodic garbage collection, the entire queue is swept for objects which have already been serialized or which occur more than once in the queue, and for searches which return no results from the database. This is done to reduce the memory footprint of the serialization process, and is triggered after enough new objects have been placed in the queue. That threshold is tunable via the --gc parameter, which defaults to running garbage collection for every 5,000 objects inserted into the queue; smaller numbers result in more frequent garbage collection.
The default of 5,000 is roughly tuned based on a database with several thousand tickets, but optimal values will vary wildly depending on database configuration and size. Values as low as 25 have provided speedups with smaller databases; if speed is a factor, experimenting with different
--gc values may be helpful. Note that there are significant boundary condition changes in serialization rate, as the queue empties and fills, causing the time estimates to be rather imprecise near the start and end of the process.
Setting --gc to 0 turns off all garbage collection; be aware that this will bloat the memory usage of the serializer. Any negative value for --gc turns off periodic garbage collection; instead, objects are checked at the time they would be inserted, and skipped if they have already been serialized or are already in the queue.
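The queue discipline described above can be sketched like so; this is a simplified model of the described behavior, not RT's implementation. Duplicates are accepted at insert time and cleaned up either by a periodic sweep or, when popped, by the deferred membership check:

```python
class SerializeQueue:
    """Sketch of the serializer's queue: inserts do no duplicate
    checking; every `gc_every` inserts, a sweep drops entries that
    are duplicated or already serialized.  A model of the behavior,
    not RT's code."""

    def __init__(self, gc_every=5000):
        self.queue, self.done = [], set()
        self.gc_every, self.inserts = gc_every, 0

    def push(self, obj):
        self.queue.append(obj)           # no membership check here
        self.inserts += 1
        if self.gc_every > 0 and self.inserts % self.gc_every == 0:
            self.gc()

    def gc(self):
        """Sweep out duplicates and already-serialized objects."""
        seen, kept = set(), []
        for obj in self.queue:
            if obj not in self.done and obj not in seen:
                seen.add(obj)
                kept.append(obj)
        self.queue = kept

    def pop(self):
        while self.queue:
            obj = self.queue.pop(0)
            if obj not in self.done:     # the check deferred from push
                self.done.add(obj)
                return obj
        return None

q = SerializeQueue(gc_every=4)
for obj in ["t1", "t2", "t1", "t2"]:     # duplicates accepted on push
    q.push(obj)                          # 4th push triggers a sweep
print(q.queue)  # ['t1', 't2'] -- duplicates collected
```

A low `gc_every` keeps the queue (and memory use) small at the cost of more sweeps, which mirrors the trade-off the --gc option controls.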