Commit f5ae3ba4 authored by Andres Freund

Make tbm_add_tuples more efficient by caching the last accessed page.

When adding a large number of tuples to a TID bitmap with
tbm_add_tuples(), a significant share of the time could be spent looking
up each page's entry in the bitmap's internal hashtable.

Improve efficiency by caching the last accessed page while iterating
over the passed-in tuples, on the expectation that consecutive tuples
will often fall on the same page.  In many cases that's a good bet, and
in the rest the added overhead is small.

Discussion: 54479A85.8060309@sigaev.ru

Author: Teodor Sigaev
Reviewed-By: David Rowley
Parent aa1d2fc5
@@ -268,14 +268,14 @@ void
 tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
                bool recheck)
 {
     int         i;
+    PagetableEntry *page = NULL;

     Assert(!tbm->iterating);

     for (i = 0; i < ntids; i++)
     {
         BlockNumber blk = ItemPointerGetBlockNumber(tids + i);
         OffsetNumber off = ItemPointerGetOffsetNumber(tids + i);
-        PagetableEntry *page;
         int         wordnum,
                     bitnum;
@@ -283,10 +283,18 @@ tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
         if (off < 1 || off > MAX_TUPLES_PER_PAGE)
             elog(ERROR, "tuple offset out of range: %u", off);

-        if (tbm_page_is_lossy(tbm, blk))
-            continue;           /* whole page is already marked */
-
-        page = tbm_get_pageentry(tbm, blk);
+        if (page == NULL || page->blockno != blk)
+        {
+            if (tbm_page_is_lossy(tbm, blk))
+                continue;       /* whole page is already marked */
+
+            /*
+             * Cache this page as it's quite likely that we'll see the same
+             * page again in the next iteration. This will save having to
+             * lookup the page in the hashtable again.
+             */
+            page = tbm_get_pageentry(tbm, blk);
+        }

         if (page->ischunk)
         {
@@ -303,7 +311,11 @@ tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
         page->recheck |= recheck;

         if (tbm->nentries > tbm->maxentries)
+        {
             tbm_lossify(tbm);
+            /* Cached page could become lossy or freed */
+            page = NULL;
+        }
     }
 }