Some notes taken while reading the paper ‘Memory Resource Management in VMware ESX Server’.

overview

  • focus: overcommitting memory
    • admission control
    • statistical multiplexing

techniques for efficient memory usage

  • take pages from one VM and give them to another -> reallocate pages
  • share common pages across VMs -> reduce the demand for pages

memory management

  • shadow page tables: map guest virtual pages directly to machine pages, so the hardware TLB caches the composed translations and most memory accesses execute without additional overhead
  • pmap: per-VM table that maps "physical" page numbers (PPNs) to machine page numbers (MPNs); the guest sees a contiguous, zero-based "physical" address space even though the backing machine pages may be scattered (see the sketch below)
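
Not from the paper itself, just how I picture the translation chain: a minimal C sketch assuming flat array-backed tables, where `guest_pt`, `pmap`, `shadow_pt`, and `translate` are all hypothetical names I made up for illustration.

```c
/* Tiny sketch of the two-level translation: the guest maps VPN -> PPN,
 * the hypervisor's pmap maps PPN -> MPN, and the shadow page table caches
 * the composed VPN -> MPN mapping that the hardware TLB actually uses. */
#include <stdio.h>

#define NPAGES 16
#define INVALID -1

static int guest_pt[NPAGES];   /* VPN -> PPN (maintained by the guest OS)  */
static int pmap[NPAGES];       /* PPN -> MPN (maintained by the hypervisor) */
static int shadow_pt[NPAGES];  /* VPN -> MPN (what the MMU/TLB consumes)    */

/* Resolve a VPN, filling the shadow entry on a "shadow page fault". */
static int translate(int vpn) {
    if (shadow_pt[vpn] != INVALID)
        return shadow_pt[vpn];            /* hit: no extra virtualization overhead */
    int ppn = guest_pt[vpn];
    if (ppn == INVALID)
        return INVALID;                   /* genuine guest page fault */
    int mpn = pmap[ppn];
    shadow_pt[vpn] = mpn;                 /* cache the composed mapping */
    return mpn;
}

int main(void) {
    for (int i = 0; i < NPAGES; i++)
        guest_pt[i] = pmap[i] = shadow_pt[i] = INVALID;

    guest_pt[3] = 5;   /* guest maps VPN 3 to "physical" page 5 */
    pmap[5] = 12;      /* hypervisor backs PPN 5 with machine page 12 */

    printf("VPN 3 -> MPN %d\n", translate(3));   /* fault, fill shadow entry */
    printf("VPN 3 -> MPN %d\n", translate(3));   /* served from shadow entry */
    return 0;
}
```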

overcommitted memory

  • process/OS: demand paging
    • the OS applies an LRU-like policy to select a victim page
    • if the page is dirty, it is written to swap on disk
    • the page is then reallocated (a toy sketch of this loop follows this list)
  • VM/hypervisor:
    • why not just use demand paging at the hypervisor level too? -> unintended interaction: the double paging problem (the hypervisor, unaware of guest access patterns, may swap out a page that the guest OS later decides to page out itself; the guest's page-out then faults the page back in from hypervisor swap just to write it out again to the guest's own swap)
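
A toy sketch of the guest-level demand-paging loop described above; the frame table, `swap_out`, and the access pattern are all made up for illustration, and real kernels only approximate LRU.

```c
/* Demand paging sketch: pick the least recently used resident page,
 * write it to "swap" only if dirty, then reuse its frame. */
#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

struct frame {
    int vpn;             /* which virtual page occupies this frame (-1 = free) */
    bool dirty;          /* page modified since it was loaded */
    unsigned last_used;  /* logical timestamp for LRU */
};

static struct frame frames[NFRAMES];
static unsigned clock_tick = 0;

static void swap_out(int vpn) {
    printf("writing dirty page %d to swap\n", vpn);   /* stand-in for disk I/O */
}

/* Return a free frame index, evicting the LRU page if necessary. */
static int allocate_frame(void) {
    int victim = 0;
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i].vpn == -1)
            return i;                                  /* free frame available */
        if (frames[i].last_used < frames[victim].last_used)
            victim = i;                                /* track least recently used */
    }
    if (frames[victim].dirty)
        swap_out(frames[victim].vpn);                  /* only dirty pages hit disk */
    frames[victim].vpn = -1;                           /* reallocate the frame */
    return victim;
}

static void touch(int vpn, bool write) {
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i].vpn == vpn) {                    /* already resident */
            frames[i].last_used = ++clock_tick;
            frames[i].dirty |= write;
            return;
        }
    int f = allocate_frame();                          /* page fault path */
    frames[f].vpn = vpn;
    frames[f].dirty = write;
    frames[f].last_used = ++clock_tick;
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) frames[i].vpn = -1;
    for (int vpn = 0; vpn < 6; vpn++)                  /* 6 pages overcommit 4 frames */
        touch(vpn, vpn % 2 == 0);
    return 0;
}
```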

balloon driver

  • achieves predictable performance by coaxing the guest OS into cooperating with it when possible (a userspace sketch follows this list)
  • inflating
    • forces the guest OS to invoke its own native memory management algorithms to reclaim pages, which the driver pins and hands back to the hypervisor
  • deflating
    • frees up memory for general use within the guest OS
  • overhead: primarily due to guest OS data structures that are sized based on the amount of “physical” memory
  • limits: the balloon driver may be uninstalled, disabled explicitly, unavailable while the guest OS is booting, or temporarily unable to reclaim memory quickly enough to satisfy current system demands
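
A rough userspace C sketch of the inflate/deflate idea, under the assumption that pinning can be imitated with `mlock` and that the hypervisor notification is just a logging stub (`notify_hypervisor` is hypothetical); a real balloon driver lives in the guest kernel and talks to the hypervisor over a private channel.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_BALLOON_PAGES 1024

static void *balloon[MAX_BALLOON_PAGES];
static size_t balloon_size = 0;

/* Hypothetical notification hook; a real driver would report the page's PPN. */
static void notify_hypervisor(void *page, const char *action) {
    printf("%s page at %p\n", action, page);
}

/* Inflate: grab pages from the guest OS, pin them, and hand them over. */
static int balloon_inflate(size_t npages, size_t page_size) {
    size_t i;
    for (i = 0; i < npages && balloon_size < MAX_BALLOON_PAGES; i++) {
        void *page;
        if (posix_memalign(&page, page_size, page_size) != 0)
            break;                           /* guest is out of memory */
        memset(page, 0, page_size);          /* touch so it is really backed */
        mlock(page, page_size);              /* pin: guest must not page it out */
        balloon[balloon_size++] = page;
        notify_hypervisor(page, "reclaim");  /* hypervisor may reuse the backing MPN */
    }
    return (int)i;
}

/* Deflate: return pages to the guest for general use. */
static void balloon_deflate(size_t npages, size_t page_size) {
    while (npages-- > 0 && balloon_size > 0) {
        void *page = balloon[--balloon_size];
        notify_hypervisor(page, "restore");
        munlock(page, page_size);
        free(page);
    }
}

int main(void) {
    size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
    int got = balloon_inflate(4, page_size);
    printf("inflated by %d pages\n", got);
    balloon_deflate(4, page_size);
    return 0;
}
```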

shared pages

  • code of an OS
  • code of user program
  • zero filled pages
  • most sharing comes from redundant code and read-only data pages
  • how to determine that two pages are identical? (a sketch follows this list)
    • hash the page contents; a matching hash is only a hint
    • confirm with a full bitwise comparison of the two pages
  • why is the shared case sometimes slightly faster? fewer cache misses, since many VMs hit the same shared copy
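
A minimal C sketch of content-based page sharing as I understand it: hash the page, treat a hash match only as a hint, and confirm with a full byte-for-byte comparison. The hash function and table layout are toy stand-ins, and the copy-on-write remapping is only noted in a comment since it needs MMU support.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define TABLE_SIZE 1024

struct frame {
    uint64_t hash;
    const uint8_t *page;   /* candidate ("hint") or shared page */
    int refcount;          /* >1 once the page is actually shared */
};

static struct frame table[TABLE_SIZE];

/* Toy content hash (FNV-1a); the paper uses a higher-quality 64-bit hash. */
static uint64_t page_hash(const uint8_t *page) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < PAGE_SIZE; i++) {
        h ^= page[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Try to share `page`; returns the canonical copy (possibly `page` itself). */
static const uint8_t *try_share(const uint8_t *page) {
    uint64_t h = page_hash(page);
    struct frame *f = &table[h % TABLE_SIZE];

    if (f->page == NULL || f->hash != h) {
        /* No candidate yet: record a hint entry and keep the page private. */
        f->hash = h;
        f->page = page;
        f->refcount = 1;
        return page;
    }
    /* Hash matched: the hash is only a hint, so confirm byte for byte. */
    if (memcmp(f->page, page, PAGE_SIZE) == 0) {
        f->refcount++;
        /* Real system: remap the caller's PPN to this MPN, mark copy-on-write. */
        return f->page;
    }
    return page;  /* false match: leave both pages private */
}

int main(void) {
    static uint8_t a[PAGE_SIZE], b[PAGE_SIZE];   /* two zero-filled pages */
    const uint8_t *pa = try_share(a);
    const uint8_t *pb = try_share(b);
    printf("shared: %s\n", (pa == pb) ? "yes" : "no");
    return 0;
}
```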

idle page tax

  • bias memory reclamation toward VMs with idle pages, forcing them to give those pages up first
  • figure out which VM has more idle pages by statistically sampling random pages in each VM to estimate its fraction of actively used memory (a sketch of the resulting selection follows this list)
    • a sampled page is tracked by invalidating any cached mappings associated with its PPN, such as hardware TLB entries and virtualized MMU state
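
If I read the paper right, the idle memory tax enters through an adjusted shares-per-page ratio rho = S / (P * (f + k*(1 - f))) with k = 1/(1 - tau), and the VM with the smallest rho is asked to give up memory. A small C sketch with made-up VM names and numbers (the 0.75 default tax rate is from the paper):

```c
#include <stdio.h>

struct vm {
    const char *name;
    double shares;        /* S: proportional-share allocation */
    double pages;         /* P: machine pages currently allocated */
    double active_frac;   /* f: sampled fraction of actively used pages */
};

/* Adjusted shares-per-page ratio; idle pages are "taxed" at rate tau. */
static double rho(const struct vm *v, double tau) {
    double k = 1.0 / (1.0 - tau);   /* cost multiplier for idle pages */
    return v->shares / (v->pages * (v->active_frac + k * (1.0 - v->active_frac)));
}

int main(void) {
    struct vm vms[] = {
        {"vm-busy", 1000, 8192, 0.90},   /* mostly active memory */
        {"vm-idle", 1000, 8192, 0.10},   /* mostly idle memory   */
    };
    double tau = 0.75;  /* default tax rate used in the paper */

    /* Reclaim from the VM with the lowest adjusted ratio (the idle one). */
    int victim = rho(&vms[0], tau) < rho(&vms[1], tau) ? 0 : 1;
    printf("reclaim from %s (rho = %.6f vs %.6f)\n",
           vms[victim].name, rho(&vms[0], tau), rho(&vms[1], tau));
    return 0;
}
```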

something else

At the USC career fair last month I applied for a VMware internship. After chatting with the recruiter about my resume and OS fundamentals, I awkwardly gushed about this paper... He said "wow, I used to work on ESX, and what you said is pretty accurate", and then two weeks later I got the rejection letter 😂