[HPA] Add pool of empty pages to hpa_central #53
spredolac wants to merge 1 commit into facebook:dev
Conversation
Review comment on:

    return ps;
    }

    /*
This explains why certain things are needed in the following commit when this comment is not necessary anymore.
Review comment on:

    typedef struct hpa_pool_s hpa_pool_t;
    struct hpa_pool_s {
        /*
         * Pool of empty huge pages to be shared between shards that are
Discussed offline and concluding the discussion here:
Force-pushed from f916f29 to 2fb3bbc.
Discussed offline; here is the latest update. By canarying on internal services with over 100M requests/s, we are seeing very little contention: 0 n_waiting/s, 40 n_lock_ops/s, and 0 total_wait_ns/s. Moreover, these experiments were conducted with a more aggressive policy that shares 2MB units as soon as they become empty, leading to heavier global hugepage pool usage. This means:
To conclude: modulo some nits (e.g., moving the stats under "Merged arena stats"), the PR is OK to merge. But we will probably hold it for a while until the planned cleanup and size rerouting are done.
Motivation
If we already have a huge, empty page, it may be more efficient to donate it to another HPA shard than to purge it and fault it in again. This also helps somewhat with virtual address space, since a shard gets an already allocated
hpdata object from the pool, if one is available, before creating another one. On some internal workloads we see memory improvements, and on some we saw small CPU improvements when using the pool.
What
Two commits are plain refactoring: moving central-related code out of
hpa.c and moving some utility functions into a separate module. The last commit adds a simple page pool, which is effectively a mutex-guarded pair of lists (purged and non-purged empty folios) that each shard can borrow from.
Testing
Added some unit tests, and also tested in production using internal workloads.
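The mutex-guarded pair of lists described above can be sketched roughly as below. This is an illustrative model only, not jemalloc's actual code: the names (`hp_node_t`, `hpa_pool_*`) and the preference for non-purged pages on borrow are assumptions for the sketch.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the central empty-page pool: one mutex guarding
 * two intrusive singly linked lists of empty hugepages, one for pages
 * still physically backed (non-purged) and one for already-purged pages.
 */
typedef struct hp_node_s hp_node_t;
struct hp_node_s {
	hp_node_t *next;
};

typedef struct {
	pthread_mutex_t mtx;
	hp_node_t *nonpurged; /* empty, but still backed by physical memory */
	hp_node_t *purged;    /* empty and already purged */
} hpa_pool_t;

static void
hpa_pool_init(hpa_pool_t *pool) {
	pthread_mutex_init(&pool->mtx, NULL);
	pool->nonpurged = NULL;
	pool->purged = NULL;
}

/* A shard donates an empty hugepage instead of purging and unmapping it. */
static void
hpa_pool_put(hpa_pool_t *pool, hp_node_t *page, bool purged) {
	pthread_mutex_lock(&pool->mtx);
	hp_node_t **head = purged ? &pool->purged : &pool->nonpurged;
	page->next = *head;
	*head = page;
	pthread_mutex_unlock(&pool->mtx);
}

/*
 * A shard borrows from the pool before creating a new hpdata; non-purged
 * pages are preferred here since reusing them skips a fresh fault.
 */
static hp_node_t *
hpa_pool_get(hpa_pool_t *pool) {
	pthread_mutex_lock(&pool->mtx);
	hp_node_t **head =
	    pool->nonpurged != NULL ? &pool->nonpurged : &pool->purged;
	hp_node_t *page = *head;
	if (page != NULL) {
		*head = page->next;
	}
	pthread_mutex_unlock(&pool->mtx);
	return page;
}
```

The design keeps the hot path trivial (one lock, one list splice), which is consistent with the near-zero mutex contention reported in the canary results above.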