BACKGROUND

A single physical platform may be segregated into a plurality of virtual networks. Here, the physical platform incorporates at least one virtual machine monitor (VMM). A conventional VMM typically runs on a computer and presents to other software the abstraction of one or more virtual machines (VMs). Each VM may function as a self-contained platform, running its own "guest operating system" (i.e., an operating system (OS) hosted by the VMM) and other software, collectively referred to as guest software.

Processes running within a VM are provided with an abstraction of some hardware resources and may be unaware of other VMs within the system. Every VM assumes that it has full control over the hardware resources allocated to it. The VMM is the entity responsible for appropriately managing and arbitrating system resources among the VMs, including, but not limited to, processors, input/output (I/O) devices and memory.

Peripheral component interconnect device (PCID) virtualization is a technique for providing an abstraction of a physical PCID(s) to the VMs. Through virtualization, the same physical PCID(s) can be shared by multiple VMs. In addition, PCID virtualization allows a VM to be presented with multiple instances of the same physical PCID. For example, a system may have a single physical PCID, but a VM may see multiple virtual PCIDs (VPCIDs), each of which interfaces with different components inside the physical platform and/or the external network to which the physical PCID is attached. The VPCID that is presented to a VM may be completely different than the actual physical PCID, thereby making it possible to expose features to the VM that may not exist in the actual physical hardware.

Virtualization of PCIDs involves the abstraction of a register set and the PCI configuration space of these devices. Virtualization of PCIDs requires efficient storage and tracking of the state and data information for each VPCID instance.

DESCRIPTION OF EMBODIMENTS

An apparatus and method for a generic, extensible and efficient data manager for virtual peripheral component interconnect devices (VPCIDs) are described. The VPCID data manager of the present invention maintains data and state information of VPCID instances. The framework of the data structure utilized by the VPCID data manager has the advantages of being efficient, extensible and generic, and therefore can be used for the virtualization of any type of PCI device. The VPCID data manager allows a VPCID to replicate itself and thus support multiple instances of itself across multiple VMs. In the following description, for purposes of explanation, numerous specific details are set forth. It will be apparent, however, to one skilled in the art that embodiments of the invention can be practiced without these specific details.

FIG. 1 illustrates one embodiment of an environment for the VPCID data manager, in which some embodiments of the present invention may operate. The specific components shown in FIG. 1 represent one example of a configuration that may be suitable for the invention and are not meant to limit the invention.

Referring to FIG. 1, an environment 100 for the VPCID data manager includes, but is not necessarily limited to, one or more VMs 102 through 106, a VMM 108 and platform hardware 110. Though three VMs are shown in FIG. 1, it is understood that any number of VMs may be present in environment 100. Each of these components is described next in more detail.

VMs 102 through 106 each include one or more VPCIDs. In an embodiment of the invention, each VM in FIG. 1 has a unique ID. VMM 108 includes a VPCID data manager 112. VPCID data manager 112 includes a VPCID data structure 114. VPCID data manager 112 uses VPCID data structure 114 to maintain data and state information of VPCID instances in environment 100. In an embodiment of the invention, VPCID data manager 112 is agnostic of the virtualization model used by VMM 108 (e.g., hypervisor, host-based, hybrid, and so forth). Other types of virtualization models may be added or substituted for those described as new types of virtualization models are developed and according to the particular application for the invention. Finally, platform hardware 110 includes a physical PCID 116.

In general, PCID virtualization is a technique for providing an abstraction of a physical PCID(s), such as PCID 116, to the VMs, such as VMs 102 through 106. Through virtualization, the same physical PCID(s) can be shared by multiple VMs. In addition, PCID virtualization allows a VM to be presented with multiple instances of the same physical PCID. For example, a system may have a single physical PCID, but a VM may see multiple virtual PCIDs (VPCIDs), each of which interfaces with different components inside the physical platform and/or the external network to which the physical PCID is attached. The VPCID that is presented to a VM may be completely different than the actual physical PCID, thereby making it possible to expose features to the VM that may not exist in the actual physical hardware.

As described above, platform hardware 110 includes physical PCID 116. Although only one PCID is shown in FIG. 1, it is understood that any number of PCIDs may be present in environment 100. Platform hardware 110 can be that of a personal computer (PC), mainframe, handheld device, portable computer, set-top box, or any other computing system. Platform hardware 110 may include one or more processors, memory and a variety of other input/output devices (not shown in FIG. 1).

The processors in platform hardware 110 can be any type of processor capable of executing software, such as hyper-threaded, SMP, multi-core, microprocessor, digital signal processor, microcontroller, or the like, or any combination thereof. Other types of processors may be added or substituted for those described as new types of processors are developed and according to the particular application for environment 100. The processors may include, but are not necessarily limited to, microcode, macrocode, software, programmable logic, hard coded logic, etc., for performing the execution of embodiments for methods of the present invention.

The memory of platform hardware 110 can be any type of recordable/non-recordable media (e.g., random access memory (RAM), read only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices, etc.), as well as electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), any combination of the above devices, or any other type of machine medium readable by the processors. Other types of recordable/non-recordable media may be added or substituted for those described as new types of recordable/non-recordable media are developed and according to the particular application for the invention. Memory may store instructions for performing the execution of method embodiments of the present invention.

In environment 100, the platform hardware 110 comprises a computing platform, which may be capable, for example, of executing a standard operating system (OS) or a virtual machine monitor (VMM), such as a VMM 108. VMM 108, though typically implemented in software, may emulate and export a bare machine interface to higher level software. Such higher level software may comprise a standard or real-time OS, may be a highly stripped down operating environment with limited operating system functionality, or may not include traditional OS facilities. Alternatively, for example, VMM 108 may be run within, or on top of, another VMM. VMMs and their typical features and functionality are well known by those skilled in the art and may be implemented, for example, in software, firmware, hardware or by a combination of various techniques.

In an embodiment of the invention, each VPCID in VMs 102 through 106 owns regions in at least two of three virtual address spaces (not shown in FIG. 1). These regions include the virtual PCI configuration space and at least one of the following two regions: the virtual I/O space and the virtual memory space. The region in virtual PCI configuration space is where the PCID configuration registers reside, which include identification registers such as the device ID and vendor ID, I/O base address registers and memory base address registers. The regions in virtual I/O space and virtual memory space include the command and status registers (CSRs), the receive and transmit DMA configuration registers, statistics registers and other device configuration registers. The I/O and memory base address registers hold the base addresses of the I/O-mapped and memory-mapped regions that host the device's CSRs.
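As a concrete illustration, the configuration-space registers named above might be sketched in C as follows; the struct name, field widths and example values are assumptions for exposition, not definitions taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the virtual PCI configuration-space registers a
 * VPCID exposes to its VM: identification registers plus the I/O and memory
 * base address registers that locate the CSR regions. Names, widths and
 * values are assumptions for illustration only. */
struct vpcid_config_space {
    uint16_t vendor_id;   /* identification register: vendor ID              */
    uint16_t device_id;   /* identification register: device ID              */
    uint32_t io_bar;      /* base address of the I/O-mapped CSR region       */
    uint32_t mem_bar;     /* base address of the memory-mapped CSR region    */
};

int main(void)
{
    /* Hypothetical values for a virtual NIC presented to a VM. */
    struct vpcid_config_space cfg = {
        .vendor_id = 0x8086,
        .device_id = 0x100E,
        .io_bar    = 0xC000,
        .mem_bar   = 0xF0000000u,
    };

    printf("vendor=%04x device=%04x io_bar=%#x mem_bar=%#x\n",
           cfg.vendor_id, cfg.device_id,
           (unsigned)cfg.io_bar, (unsigned)cfg.mem_bar);
    return 0;
}
```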

PCID virtualization allows a VM to be presented with multiple instances of the same physical PCID. A VPCID instance can be uniquely identified by the unique ID of the VM that hosts the VPCID, the type of address space access (configuration, I/O or memory) and the actual address accessed within that space. Every VPCID instance needs associated state blobs that contain state and data information. State blobs include, but are not necessarily limited to, an Electrically Erasable Programmable Read-Only Memory (EEPROM) map and direct memory access (DMA) engine states. Since the data and state information for each VPCID instance are accessed frequently, the mechanism for storing and retrieving them must be efficient. Frequently accessed data can therefore be cached; the polling of status registers is one example. VPCID data manager 112 utilizes VPCID data structure 114 to accomplish the foregoing. VPCID data structure 114 is further described next with reference to FIG. 2.
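A VPCID instance is thus addressed by a triple of VM ID, address-space type and address; a minimal sketch of such a lookup key, with hypothetical names, could look like this:

```c
#include <stdint.h>

/* Sketch of the lookup key described above: a VPCID instance is uniquely
 * identified by the hosting VM's ID, the type of address-space access and
 * the actual address accessed within that space. Names are illustrative. */
enum vpcid_space {
    VPCID_SPACE_CONFIG,   /* virtual PCI configuration space */
    VPCID_SPACE_IO,       /* virtual I/O space               */
    VPCID_SPACE_MEMORY    /* virtual memory space            */
};

struct vpcid_access_key {
    uint32_t         vm_id;    /* unique ID of the VM hosting the VPCID  */
    enum vpcid_space space;    /* which address space is being accessed  */
    uint64_t         address;  /* address accessed within that space     */
};

int main(void)
{
    struct vpcid_access_key key = {
        .vm_id   = 1,                /* hypothetical VM ID       */
        .space   = VPCID_SPACE_IO,   /* I/O-space access         */
        .address = 0xC010,           /* hypothetical CSR address */
    };
    (void)key;
    return 0;
}
```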

Referring to FIG. 2, the framework of VPCID data structure 114 has the advantages of being efficient, extensible and generic, and therefore can be used for the virtualization of any type of PCI device. The root structure of VPCID data structure 114 is a VM ID array 202. In an embodiment of the invention, each VM has a unique ID. Each unique VM ID serves as an index into the elements of VM ID array 202. Each element of VM ID array 202 represents a unique VM. Associated with every VM element in array 202 are a set of VPCID instances and a cache of VPCID instance pointers.

The instance pointer cache of each element of VM ID array 202 represents the list of recently accessed addresses and associated VPCID instance pointers. Thus, for frequently accessed addresses, this cache allows immediate retrieval of the associated VPCID instance structure. Each element of VM ID array 202 also has three hash table pointers associated with it: a configuration hash table pointer, an I/O hash table pointer and a memory hash table pointer. The configuration hash table pointer points to configuration access ranges 204, the I/O hash table pointer points to I/O access ranges 206 and the memory hash table pointer points to memory access ranges 208. Entries in each of configuration access ranges 204, I/O access ranges 206 and memory access ranges 208 point to the VPCID instances in a VPCID instance array 210 that own the address access ranges.

VPCID instance array 210 is an array of VPCID instances. Each VPCID instance in VPCID instance array 210 includes, but is not necessarily limited to, the following elements: a memory base, an I/O base, a configuration base and a data blob pointer. As described above, a VPCID instance can be uniquely identified by the unique ID of the VM that hosts the VPCID, the type of address space access (configuration, I/O, or memory) and the actual address accessed within that space. The memory base, I/O base and configuration base addresses are used to determine whether the actual address being accessed falls within the appropriate address range. Every VPCID instance in VPCID instance array 210 has an associated array of data blobs 212.

Data blobs 212 store VPCID-specific state and data information for their associated VPCID instance. Data blobs 212 include, but are not necessarily limited to, the following elements: an Electrically Erasable Programmable Read-Only Memory (EEPROM) map and configuration registers. The EEPROM map represents the device EEPROM that is used to hold various product-specific configuration information; it is used to provide pre-boot configuration. The configuration registers include registers used for configuring VPCID features, including the receive and transmit DMA engines, power management parameters, VLAN configuration, etc. Data blobs 212 may be implemented as an array, linked list, hash table, or a different data structure depending on the application. Embodiments of the operation of how VPCID data manager 112 utilizes VPCID data structure 114 to provide a generic, extensible and efficient data manager for VPCID instances are described next with reference to FIGS. 3-7.
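Before turning to those flows, the framework of FIG. 2 might be summarized in the following C sketch; the type names, array sizes and field layouts are illustrative assumptions, and collision chaining, locking and dynamic sizing are elided.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative C sketch of the FIG. 2 framework. Type names, array sizes
 * and field layouts are assumptions for exposition only. */

#define CACHE_ENTRIES  8    /* recently accessed addresses cached per VM */
#define HASH_BUCKETS  16    /* buckets per access-range hash table       */
#define MAX_INSTANCES 32    /* capacity of the VPCID instance array      */

/* A data blob holds VPCID-specific state, e.g. an EEPROM map or
 * configuration registers (DMA engines, power management, VLANs, ...). */
struct data_blob {
    void   *data;
    size_t  size;
};

/* One VPCID instance (an element of VPCID instance array 210): base
 * addresses used to validate accesses, plus a pointer to its data blobs. */
struct vpcid_instance {
    uint64_t          config_base;
    uint64_t          io_base;
    uint64_t          mem_base;
    struct data_blob *blobs;            /* this instance's data blobs 212 */
};

/* An access-range entry: entries in the configuration, I/O and memory
 * hash tables point to the VPCID instance that owns the range. */
struct access_range {
    uint64_t               base;
    uint64_t               size;
    struct vpcid_instance *instance;
    struct access_range   *next;        /* chaining within a bucket */
};

/* Cache of recently accessed addresses and their instance pointers. */
struct cache_entry {
    uint64_t               address;
    struct vpcid_instance *instance;
};

/* One element of the root VM ID array 202: a per-VM instance-pointer
 * cache, the three access-range hash tables (204, 206, 208) and the set
 * of VPCID instances associated with the VM. */
struct vm_element {
    struct cache_entry     cache[CACHE_ENTRIES];
    struct access_range   *config_ranges[HASH_BUCKETS]; /* 204 */
    struct access_range   *io_ranges[HASH_BUCKETS];     /* 206 */
    struct access_range   *mem_ranges[HASH_BUCKETS];    /* 208 */
    struct vpcid_instance  instances[MAX_INSTANCES];    /* 210 */
};

/* Root structure: VM ID array 202, indexed by each VM's unique ID. */
static struct vm_element vm_id_array[4];

int main(void)
{
    (void)vm_id_array;
    return 0;
}
```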

FIG. 3 is a flow diagram of one embodiment of a process for creating a VPCID instance. Referring to FIG. 3, the process begins at processing block 302 where VPCID data structure 114 is allocated for the VPCID instance. Processing block 302 is described in more detail below with reference to FIG. 4.

At processing block 304, access ranges are added for the VPCID instance to either the I/O hash table (i.e., I/O access ranges 206) or the memory hash table (i.e., memory access ranges 208). As described above, each VPCID owns regions in the virtual PCID configuration space and in at least one of the virtual I/O space and the virtual memory space. Processing block 304 is described in more detail below with reference to FIG. 5.

At processing block 306, data blobs 212 are inserted for the VPCID instance. Processing block 306 is described in more detail below with reference to FIG. 6. The process of FIG. 3 ends at this point.
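A minimal sketch of this three-step creation flow, with hypothetical function names and trivial stub bodies standing in for the details of FIGS. 4-6, might look like this:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch of the FIG. 3 flow: create a VPCID instance in three steps that
 * mirror processing blocks 302, 304 and 306. Names, signatures and the
 * trivial stub bodies are hypothetical placeholders. */

struct vpcid_instance { uint64_t config_base, io_base, mem_base; };

/* Block 302 (FIG. 4): allocate the data structure and hook the instance
 * into the hosting VM's configuration hash table. */
static struct vpcid_instance *allocate_vpcid(uint32_t vm_id, uint64_t config_base)
{
    static struct vpcid_instance inst;          /* placeholder storage */
    inst.config_base = config_base;
    (void)vm_id;
    return &inst;
}

/* Block 304 (FIG. 5): add the instance's I/O or memory access range. */
static int add_access_range(struct vpcid_instance *inst, uint64_t base,
                            uint64_t size, int is_io)
{
    if (is_io) inst->io_base = base; else inst->mem_base = base;
    (void)size;
    return 0;
}

/* Block 306 (FIG. 6): insert a data blob (e.g. the EEPROM map). */
static int insert_data_blob(struct vpcid_instance *inst,
                            const void *blob, size_t len)
{
    (void)inst; (void)blob; (void)len;          /* storage elided here */
    return 0;
}

int main(void)
{
    uint8_t eeprom[64] = {0};                   /* hypothetical EEPROM map */
    struct vpcid_instance *inst = allocate_vpcid(/*vm_id=*/1, 0xE000);
    add_access_range(inst, 0xC000, 0x40, /*is_io=*/1);
    insert_data_blob(inst, eeprom, sizeof eeprom);
    printf("VPCID instance created: config_base=%#llx io_base=%#llx\n",
           (unsigned long long)inst->config_base,
           (unsigned long long)inst->io_base);
    return 0;
}
```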

FIG. 4 is a flow diagram of one embodiment of a process for allocating VPCID data structure 114 (step 302 of FIG. 3). Referring to FIG. 4, the process begins at processing block 402 where the unique VM ID and the configuration base address of the VPCID instance are provided to VPCID data manager 112.

At processing block 404, VPCID data manager 112 uses the unique VM ID and the configuration base address to index into VM ID array 202 and ultimately into the VM's configuration hash table (i.e., configuration access ranges 204) via the configuration hash table pointer.

At processing block 406, VPCID data manager 112 adds a pointer in the VM's configuration hash table (i.e., configuration access ranges 204) to the new VPCID instance in VPCID instance array 210. The process of FIG. 4 ends at this point.
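A sketch of this allocation step, assuming a simple modulo hash of the configuration base address and omitting collision chaining, might look like the following; the names and sizes are hypothetical.

```c
#include <stdint.h>

/* Sketch of the FIG. 4 flow (blocks 402-406): the unique VM ID indexes the
 * VM ID array, and the configuration base address selects a bucket in that
 * VM's configuration hash table, where a pointer to the new VPCID instance
 * is stored. The hash function and sizes are assumptions; collision
 * chaining is omitted. */

#define MAX_VMS      4
#define HASH_BUCKETS 16

struct vpcid_instance { uint64_t config_base; };

struct vm_element {
    struct vpcid_instance *config_table[HASH_BUCKETS];  /* config access ranges 204 */
};

static struct vm_element vm_id_array[MAX_VMS];           /* VM ID array 202 */

/* Hypothetical hash: fold the configuration base address into a bucket. */
static unsigned cfg_hash(uint64_t config_base)
{
    return (unsigned)((config_base >> 4) % HASH_BUCKETS);
}

/* Blocks 402-406: register a new VPCID instance under (vm_id, config_base). */
int allocate_vpcid(uint32_t vm_id, struct vpcid_instance *inst)
{
    if (vm_id >= MAX_VMS)
        return -1;
    struct vm_element *vm = &vm_id_array[vm_id];            /* block 404 */
    vm->config_table[cfg_hash(inst->config_base)] = inst;   /* block 406 */
    return 0;
}

int main(void)
{
    static struct vpcid_instance nic = { .config_base = 0xE000 };
    return allocate_vpcid(/*vm_id=*/1, &nic);
}
```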

FIG. 5 is a flow diagram of one embodiment of a process for adding access ranges to either an I/O hash table or a memory hash table (step 304 of FIG. 3). Referring to FIG. 5, the process begins at processing block 502 where VPCID data manager 112 retrieves the VPCID instance pointer from the configuration hash table (i.e., configuration access ranges 204) of the VM to which the VPCID instance belongs. In an embodiment of the invention, this operation takes between O(1) and O(n) (where n is the total number of VPCID instances), depending on the quality of the hash function H. Note that O(n) is the worst-case running time, not the average. A good hash function distributes the VPCID instances evenly across the hash table so that every bucket holds one or very few VPCID instance pointers. Note that a cache lookup (via the instance pointer cache in VM ID array 202) is done first to locate the VPCID instance pointer corresponding to the address.

At processing block 504, VPCID data manager 112 selects a bucket within the VM's I/O or memory hash table (i.e., I/O access ranges 206 or memory access ranges 208, respectively) by computing an index based on the I/O or memory base address and the size of the range being added.

At processing block 506, VPCID data manager 112 copies the VPCID instance pointer from the configuration hash table to the I/O or memory hash table. From this point on, the VPCID instance pointer is quickly retrieved whenever the device driver in the VM accesses an address within the range. The process of FIG. 5 ends at this point.
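The lookup-then-copy flow of FIG. 5 might be sketched as follows; the hash function, table sizes and single-slot buckets are simplifying assumptions, and collision chaining is elided.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the FIG. 5 flow: look up the VPCID instance pointer (cache
 * first, then the configuration hash table), select a bucket in the I/O or
 * memory hash table from the base address and range size, and copy the
 * instance pointer there. Names and the hash function are assumptions. */

#define HASH_BUCKETS  16
#define CACHE_ENTRIES  4

struct vpcid_instance { uint64_t config_base, io_base, mem_base; };

struct vm_element {
    struct { uint64_t address; struct vpcid_instance *inst; } cache[CACHE_ENTRIES];
    struct vpcid_instance *config_table[HASH_BUCKETS];   /* 204 */
    struct vpcid_instance *io_table[HASH_BUCKETS];       /* 206 */
    struct vpcid_instance *mem_table[HASH_BUCKETS];      /* 208 */
};

/* Hypothetical hash over a base address and range size. */
static unsigned range_hash(uint64_t base, uint64_t size)
{
    return (unsigned)((base ^ size) % HASH_BUCKETS);
}

/* Block 502: cache lookup first, then the configuration hash table. */
static struct vpcid_instance *find_instance(struct vm_element *vm, uint64_t config_base)
{
    for (size_t i = 0; i < CACHE_ENTRIES; i++)
        if (vm->cache[i].inst && vm->cache[i].address == config_base)
            return vm->cache[i].inst;
    return vm->config_table[range_hash(config_base, 0)];
}

/* Blocks 504-506: pick a bucket from (base, size) and copy the pointer. */
int add_access_range(struct vm_element *vm, uint64_t config_base,
                     uint64_t base, uint64_t size, int is_io)
{
    struct vpcid_instance *inst = find_instance(vm, config_base);
    if (!inst)
        return -1;
    unsigned bucket = range_hash(base, size);
    if (is_io) { inst->io_base = base;  vm->io_table[bucket]  = inst; }
    else       { inst->mem_base = base; vm->mem_table[bucket] = inst; }
    return 0;
}

int main(void)
{
    static struct vm_element vm;                              /* zero-initialized */
    static struct vpcid_instance nic = { .config_base = 0xE000 };
    vm.config_table[range_hash(nic.config_base, 0)] = &nic;   /* set up by FIG. 4 */
    return add_access_range(&vm, 0xE000, 0xC000, 0x40, /*is_io=*/1);
}
```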

FIG. 6 is a flow diagram of one embodiment of a process for inserting data blobs associated to the VPCID instance (step 306 of FIG. 3). Referring to FIG. 6, the process begins at processing block 602 where VPCID data manager 112 retrieves the VPCID instance pointer from either the configuration hash table, the I/O hash table or the memory hash table of the VM to which the VPCID instance belongs. Note that computing a hash index is only needed if the instance pointer is not already stored in the instance pointer cache.

At processing block 604, VPCID data manager 112 inserts the data blob into the list of data blobs associated with the VPCID instance. The process of FIG. 6 ends at this point.
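A sketch of this insertion step, assuming the data blobs are kept as a singly linked list (the description notes that an array, linked list or hash table are all possible), might look like this:

```c
#include <stdlib.h>

/* Sketch of the FIG. 6 flow: once the VPCID instance pointer has been
 * retrieved (from the instance-pointer cache or one of the hash tables,
 * block 602), the new data blob is linked into the instance's list of
 * blobs (block 604). Names and the list layout are assumptions; error
 * handling and cleanup are elided. */

struct data_blob {
    void             *data;    /* e.g. EEPROM map or configuration registers */
    size_t            size;
    struct data_blob *next;
};

struct vpcid_instance {
    struct data_blob *blobs;   /* head of the instance's data-blob list */
};

/* Block 604: prepend the blob to the instance's list of data blobs. */
int insert_data_blob(struct vpcid_instance *inst, void *data, size_t size)
{
    struct data_blob *blob = malloc(sizeof(*blob));
    if (!blob)
        return -1;
    blob->data = data;
    blob->size = size;
    blob->next = inst->blobs;
    inst->blobs = blob;
    return 0;
}

int main(void)
{
    static unsigned char eeprom_map[128];        /* hypothetical EEPROM map */
    struct vpcid_instance inst = { .blobs = NULL };
    return insert_data_blob(&inst, eeprom_map, sizeof eeprom_map);
}
```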

FIG. 7 is a flow diagram of one embodiment of a process for accessing a data blob associated with a VPCID. Referring to FIG. 7, the process begins at processing block 702 where VPCID data manager 112 looks up the VPCID instance pointer in VPCID instance array 210.

At processing block 704, VPCID data manager 112 accesses the appropriate data blob in data blobs 212 via the VPCID instance pointer. Note also that once the VPCID instance pointer is retrieved, accessing a data blob is an O(1) lookup operation, since it simply involves indexing into data blobs 212 with the specified index value. The process of FIG. 7 ends at this point.
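A sketch of this constant-time access, assuming the blobs are stored in a fixed-size array indexed by a blob identifier, might look like the following; the index names are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the FIG. 7 flow: once the VPCID instance pointer has been
 * looked up (block 702), fetching a data blob is O(1), since it is just an
 * index into the instance's data-blob array (block 704). The blob indices
 * and array layout are assumptions. */

#define MAX_BLOBS 4

struct data_blob { void *data; size_t size; };

struct vpcid_instance {
    struct data_blob blobs[MAX_BLOBS];   /* data blobs 212 for this instance */
};

/* Hypothetical blob indices for this sketch. */
enum { BLOB_EEPROM_MAP = 0, BLOB_CONFIG_REGS = 1 };

/* Block 704: constant-time access to a blob by its index. */
static struct data_blob *access_data_blob(struct vpcid_instance *inst, unsigned idx)
{
    return (idx < MAX_BLOBS) ? &inst->blobs[idx] : NULL;
}

int main(void)
{
    static uint8_t eeprom[64];
    struct vpcid_instance inst = {
        .blobs = { [BLOB_EEPROM_MAP] = { eeprom, sizeof eeprom } }
    };
    struct data_blob *b = access_data_blob(&inst, BLOB_EEPROM_MAP);
    return (b && b->data == eeprom) ? 0 : 1;
}
```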

SRC=http://www.freepatentsonline.com/7484210.html
