
New implementation of bulk insert #8990

Open
hvlad wants to merge 2 commits into master from work/bulk_insert

Conversation

@hvlad
Member

@hvlad hvlad commented Apr 15, 2026

In Firebird 5, parallel restore was introduced. It contains "bulk insert" code that allows concurrent writers to lower contention on PP. It also makes each writer use its own dedicated DP to fill with records.

With the shared metadata cache in v6, that code became broken, and parallel restore creates a lot of unused data pages. Instead of fixing that first attempt at bulk insert, I offer a new approach that fixes the issue, has better performance, and could be used more widely.

Note, the patch doesn't remove the old code; that could be done a bit later, after agreement on the new code.

@hvlad hvlad self-assigned this Apr 15, 2026
@dyemanov
Member

Some details about how the new approach is different from the old one would be appreciated.

@hvlad
Member Author

hvlad commented Apr 15, 2026

The insert code path now avoids many unnecessary steps, such as triggers, validations (except NOT NULL at the field level) and index maintenance - all of these are absent during restore. Also, all records are inserted into a dedicated in-memory buffer to avoid endless latches on data page buffers. When the in-memory buffer is full, its contents are copied into actual DB buffers and go to disk in the usual way. The in-memory buffer starts at 1 page and is resized to 8 pages once the first 8 pages of the relation are filled. Blob contents are put into a separate in-memory buffer that works the same way.
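The buffering scheme above can be sketched roughly as follows. This is a minimal illustration, not the actual Firebird code: the class name `StagingBuffer`, the flush accounting, and the page-size constant are all hypothetical, and the real implementation copies into cache pages rather than simply discarding the buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

constexpr std::size_t PAGE_SIZE = 8192;      // illustrative page size
constexpr std::size_t GROW_AFTER_PAGES = 8;  // grow once 8 pages were filled

class StagingBuffer
{
public:
    StagingBuffer() : m_data(PAGE_SIZE), m_used(0), m_pagesFlushed(0) {}

    // Append one record; flush to the "real" DB buffers when the staging
    // area is full. Returns the number of flushes this call triggered.
    int put(const void* record, std::size_t len)
    {
        int flushes = 0;
        if (m_used + len > m_data.size())
        {
            flushToDatabase();
            ++flushes;
        }
        std::memcpy(m_data.data() + m_used, record, len);
        m_used += len;
        return flushes;
    }

    std::size_t capacityPages() const { return m_data.size() / PAGE_SIZE; }

private:
    void flushToDatabase()
    {
        // In the real code the contents would be copied into actual DB
        // buffers (cache pages) and written to disk in the usual way.
        m_pagesFlushed += capacityPages();
        m_used = 0;

        // Grow from 1 page to 8 pages once the first 8 pages are filled.
        if (m_pagesFlushed >= GROW_AFTER_PAGES &&
            capacityPages() < GROW_AFTER_PAGES)
        {
            m_data.resize(GROW_AFTER_PAGES * PAGE_SIZE);
        }
    }

    std::vector<char> m_data;
    std::size_t m_used;
    std::size_t m_pagesFlushed;
};
```

The key property the sketch shows is that a writer touches only its private buffer on the per-record path; shared page latches are needed only at flush time.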

The code is put into two main classes:

  • BulkInsert - implements the in-memory buffers and the code that works with records (mostly an analog/copy of some DPM parts), and
  • BulkInsertNode - replaces StoreNode and contains some further optimizations, such as pre-calculated target descriptors.
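The BulkInsertNode optimization mentioned above (pre-calculated target descriptors) can be illustrated like this. All names here are hypothetical, not the actual classes from the patch; the point is only that descriptor lookups happen once at prepare time, leaving the per-record path as plain copies.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Illustrative stand-in for a field's target descriptor: where in the
// record the value goes and how many bytes it occupies.
struct FieldDesc { int offset; int length; };

struct Record { std::vector<char> data; };

class BulkStoreSketch
{
public:
    // Done once, at prepare time - the generic StoreNode path would
    // resolve this per record.
    explicit BulkStoreSketch(std::vector<FieldDesc> targets)
        : m_targets(std::move(targets)) {}

    // Per-record path: no metadata lookups, just copies into the
    // precalculated offsets.
    void store(const std::vector<std::vector<char>>& fields, Record& out) const
    {
        for (std::size_t i = 0; i < m_targets.size(); ++i)
        {
            const FieldDesc& d = m_targets[i];
            std::memcpy(out.data.data() + d.offset, fields[i].data(),
                        static_cast<std::size_t>(d.length));
        }
    }

private:
    std::vector<FieldDesc> m_targets;
};
```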

@AlexPeshkoff
Member

AlexPeshkoff commented Apr 15, 2026 via email

@hvlad
Member Author

hvlad commented Apr 15, 2026

> But what about relPages->rel_*_data_space stored at database, not attachment level? From new code it seems that they should be moved back to attachment from database? Am I missing something?

These fields are not used in the bulk insert code and thus were not affected by this patch.

At next step I'm going to:

leave as is:

	ULONG rel_index_root;		// index root page number
	USHORT rel_pg_space_id;

make atomic:

	ULONG rel_data_pages;		// count of relation data pages
	ULONG rel_slot_space;		// lowest pointer page with slot space
	ULONG rel_pri_data_space;	// lowest pointer page with primary data page space
	ULONG rel_sec_data_space;	// lowest pointer page with secondary data page space

remove:

	ULONG rel_last_free_pri_dp;	// last primary data page found with space
	ULONG rel_last_free_blb_dp;	// last blob data page found with space
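A minimal sketch of the "make atomic" part of the plan, assuming C++11 atomics. The struct and method names are hypothetical; the real fields live in relPages. A "lowest pointer page with space" hint should only move downward, so a plain store is not enough - a compare-exchange loop keeps concurrent updates from raising the value.

```cpp
#include <atomic>
#include <cassert>

using ULONG = unsigned int;  // Firebird-style 32-bit unsigned typedef

struct RelPagesSketch
{
    std::atomic<ULONG> rel_data_pages{0};    // count of relation data pages
    std::atomic<ULONG> rel_slot_space{~0u};  // lowest PP with slot space

    void addDataPages(ULONG n)
    {
        // A pure counter: fetch_add is sufficient, no ordering needed.
        rel_data_pages.fetch_add(n, std::memory_order_relaxed);
    }

    // Lower the hint to 'pp' only if it is below the stored value;
    // concurrent lowerings race harmlessly.
    void lowerSlotSpace(ULONG pp)
    {
        ULONG cur = rel_slot_space.load(std::memory_order_relaxed);
        while (pp < cur &&
               !rel_slot_space.compare_exchange_weak(
                   cur, pp, std::memory_order_relaxed))
        {
            // 'cur' was reloaded by the failed CAS; retry while pp < cur.
        }
    }
};
```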

@dyemanov dyemanov self-requested a review April 15, 2026 16:54
@aafemt
Contributor

aafemt commented Apr 16, 2026

I think the term "bulk insert" is a little misleading here. Oracle uses "direct-path insert" for the path where data goes straight into data pages.

