US010944811B2

Doctor et al.

(10) Patent No.: US 10,944,811 B2
(45) Date of Patent: Mar. 9, 2021

(54) HYBRID CLOUD NETWORK MONITORING SYSTEM FOR TENANT USE

(71) Applicant: VMware, Inc., Palo Alto, CA (US)

(72) Inventors: Brad Doctor, Broomfield, CO (US); Matt Probst, Orem, UT (US)

(73) Assignee: VMware, Inc., Palo Alto, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 46 days.

(21) Appl. No.: 15/846,133

(22) Filed: Dec. 18, 2017

(65) Prior Publication Data
US 2018/0109602 A1    Apr. 19, 2018

Related U.S. Application Data

(63) Continuation of application No. 14/579,911, filed on Dec. 22, 2014, now Pat. No. 9,860,309.

(51) Int. Cl.
H04L 29/08 (2006.01)
H04L 12/46 (2006.01)
H04L 12/26 (2006.01)

(52) U.S. Cl.
CPC: H04L 67/10 (2013.01); H04L 12/4633 (2013.01); H04L 43/062 (2013.01); H04L 43/12 (2013.01)

(58) Field of Classification Search
CPC: H04L 67/10; H04L 12/4633; H04L 43/062; H04L 43/12
USPC: 709/224
See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS
`
7,899,048 B1 *   3/2011  Walker ............... H04L 43/18 (370/390)
8,547,972 B2 *  10/2013  Mahdavi .............. G06F 9/455 (370/389)
8,645,952 B2 *   2/2014  Biswas ............... H04L 49/208 (718/1)
8,665,747 B2 *   3/2014  Elsen ................ H04L 45/18 (370/254)
8,879,554 B2 *  11/2014  Emmadi ............... H04L 49/30 (370/389)
8,996,691 B1 *   3/2015  Stickle .............. H04L 12/40071 (709/224)
9,397,960 B2 *   7/2016  Arad ................. H04L 49/355

(Continued)
Primary Examiner: Esther B. Henderson
Assistant Examiner: Nazia Naoreen
(74) Attorney, Agent, or Firm: Loza & Loza, LLP

(57) ABSTRACT
Network traffic in a cloud computing system is monitored in response to a request to capture network traffic of a tenant port of a first virtual machine (VM) executing in the cloud computing system, wherein the first VM is associated with a first tenant organization different from a second organization managing the cloud computing system. A decapsulating VM having a first network interface and a second network interface is instantiated, wherein the decapsulating VM is inaccessible to the first tenant organization. An encapsulated port mirroring session from the tenant port of the first VM to the first network interface of the decapsulating VM is then established. A plurality of packets comprising captured network traffic received via the encapsulated port mirroring session are decapsulated, and the captured network traffic is forwarded via the second network interface of the decapsulating VM to a sniffer VM.
14 Claims, 4 Drawing Sheets
`
[Representative figure: flow diagram of method 400, split between packet capture module 280 (receive data packet from network; transmit data packet to destination VM; if the data packet is to be monitored, encapsulate it and transmit the encapsulated packet to a tunnel; repeat while packets arrive) and decapsulator VM 240 (read mapping of tenants to sniffer VMs; start a sending thread for each sniffer VM; receive each encapsulated packet from the tunnel; extract the data packet; determine the address of the target sniffer VM; update the data packet with that address; transmit it to the target sniffer VM).]
WIZ, Inc. EXHIBIT - 1012
WIZ, Inc. v. Orca Security LTD.
`
`
`
US 10,944,811 B2 (Page 2)

(56) References Cited

U.S. PATENT DOCUMENTS

2014/0185616 A1 *  7/2014  Bloch ................ H04L 12/4633 (370/392)
2014/0279885 A1 *  9/2014  Anantharam .......... H04L 5/0055 (707/622)
2015/0139232 A1 *  5/2015  Yalagandula ......... G06F 9/45558 (370/392)

* cited by examiner
`
`
`
U.S. Patent    Mar. 9, 2021    Sheet 1 of 4    US 10,944,811 B2

[FIG. 1: Block diagram of hybrid cloud computing system 100. Virtualized computing system 102 contains host(s) 104 (hardware platform 106 with CPU 108, memory 110, NIC 112, and storage 114; hypervisor 116; VMs 120₁ to 120ₙ), network 122, gateway 124, management network 126, and virtualization manager 130 with hybrid cloud manager 132. It connects over network 140 and direct connect 142 to cloud computing system 150, which contains cloud director 152 with catalog 166, and infrastructure platform 154 (virtualization environment 156 with orchestration component 158; hardware resources 160 with hosts 162₁ to 162ₘ and storage array network 164). Cloud computing environment(s) 170 include VMs 172, hybridity director 174, and virtual data center 180 with virtual network 182, gateway 184 (ports 184, 186 to isolated internal networks), and gateway connectivity to network 140.]
`
[FIG. 2 (Sheet 2 of 4): Block diagram of public cloud-based computing system 200, showing hosts 162₁, 162₂, and 162₃ running hypervisors 216₁, 216₂, and 216₃; tenant VMs 172₁, 172₂, and 172₃ connected to distributed virtual switch 270 via virtual ports (including tenant port 272 and port 274); decapsulator VM 240 containing packet processor 242, with VNIC 276 and VNIC 278; sniffer VMs 250₁ and 250₂; distributed virtual switch 290; hybridity director 174 with packet capture module 280; and network 140.]
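In FIG. 2, decapsulator VM 240 receives mirrored traffic that arrives tunnel-encapsulated on one network interface (VNIC 276), strips the outer header, and forwards the inner packet out the other interface (VNIC 278). A minimal sketch of that header-stripping step follows; the 8-byte outer header used here is a simplified stand-in, not the actual ERSPAN/GRE wire format, and the 0x88BE constant (the real ERSPAN Type II Ethertype) is reused only as an illustrative tag:

```python
import struct

# Illustrative tunnel framing (NOT real GRE/ERSPAN): an 8-byte outer
# header carrying a flags word, a protocol tag, and the originating
# tenant port id, followed by the captured inner packet unchanged.
HDR_FMT = "!HHI"              # flags, protocol tag, tenant port id
HDR_LEN = struct.calcsize(HDR_FMT)

def encapsulate(inner: bytes, tenant_port: int) -> bytes:
    """Wrap a captured packet in the illustrative outer header."""
    return struct.pack(HDR_FMT, 0, 0x88BE, tenant_port) + inner

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    """Strip the outer header; return (tenant port id, inner packet)."""
    _, proto, tenant_port = struct.unpack(HDR_FMT, frame[:HDR_LEN])
    if proto != 0x88BE:       # reject frames that are not mirror traffic
        raise ValueError("not mirror traffic")
    return tenant_port, frame[HDR_LEN:]
```

The round trip preserves the captured packet byte-for-byte, which is the property the decapsulating VM relies on: the sniffer VM ultimately sees the tenant's traffic as it appeared at the tenant port.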
`
`
`
[FIG. 3 (Sheet 3 of 4): Conceptual diagram of the packet path. At distributed virtual switch 270, packets arriving from network 140 (tenant addr | payload) are forwarded to tenant VM 172 via port 274; tenant monitoring list 310, updated from hybrid cloud manager 132, selects packets to mirror, and mirrored copies are sent with a tunnel header prepended (tunnel header | tenant addr | payload). In decapsulator VM 240, packet processor 242 extracts the inner packet, consults tenant-sniffer mapping 300 (also updated from hybrid cloud manager 132), replaces the tenant address with the sniffer address (sniffer addr | payload), and hands the packet to sending threads 1 through n for delivery to sniffer VMs 250.]
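The address-replacement step in FIG. 3 is a table lookup: packet processor 242 keys the tenant-sniffer mapping 300 by tenant and rewrites only the destination address, leaving the captured payload intact. A sketch under simplified assumptions, with packets modeled as dicts and both tables as plain in-memory structures (all names and addresses here are hypothetical):

```python
# Hypothetical in-memory analogues of tenant-sniffer mapping 300 and
# tenant monitoring list 310, both kept current by updates pushed from
# hybrid cloud manager 132 in the patent's architecture.
tenant_sniffer_mapping = {"tenant-a": "10.0.9.5", "tenant-b": "10.0.9.6"}
tenant_monitoring_list = {"tenant-a"}

def should_mirror(packet: dict) -> bool:
    """The 'data packet to be monitored?' check against list 310."""
    return packet["tenant"] in tenant_monitoring_list

def retarget(packet: dict) -> dict:
    """FIG. 3's 'replace address' step: swap in the target sniffer
    VM's address while leaving the captured payload untouched."""
    sniffer_addr = tenant_sniffer_mapping[packet["tenant"]]
    return {**packet, "dst_addr": sniffer_addr}
```

Because the payload is untouched, the sniffer VM receives an exact copy of the tenant's traffic; only the delivery address differs from the original packet.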
`
[FIG. 4 (Sheet 4 of 4): Flow diagram of method 400.
Packet capture module 280: 405 receive data packet from network; 410 transmit data packet to destination VM; 415 data packet to be monitored? (No: skip to 430); 420 encapsulate data packet; 425 transmit encapsulated packet to tunnel; 430 continue receiving packets? (Yes: return to 405).
Decapsulator VM 240: 406 read mapping of tenants to sniffer VMs; 407 start sending thread for each sniffer VM; 435 receive encapsulated packet from tunnel; 440 extract data packet from encapsulated packet; 445 determine address of target sniffer VM; 450 update data packet to include address of target sniffer VM; 455 transmit updated data packet to target sniffer VM; 460 continue receiving packets? (Yes: return to 435; No: End).]
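The two flows of FIG. 4 can be sketched end to end as a small simulation: a capture side that copies monitored packets into a tunnel (steps 405-430) and a decapsulator side that extracts, retargets, and delivers them (steps 435-460). Queue-based stand-ins replace the real tunnel and VNICs, the per-sniffer sending threads of steps 406-407 are collapsed into a single loop, and all names are illustrative:

```python
from queue import Queue

tunnel: Queue = Queue()                  # stand-in for the mirror tunnel
delivered: list = []                     # (sniffer addr, payload) pairs

monitoring_list = {"tenant-a"}                # cf. tenant monitoring list 310
sniffer_mapping = {"tenant-a": "sniffer-1"}   # cf. tenant-sniffer mapping 300

def capture_module(packets):
    """Steps 405-430: forward every packet, mirror the monitored ones."""
    for tenant, payload in packets:
        # 410: the original packet goes on to its destination VM (omitted).
        if tenant in monitoring_list:        # 415: to be monitored?
            tunnel.put((tenant, payload))    # 420-425: encapsulate + send

def decapsulator_vm():
    """Steps 435-460: drain the tunnel, retarget, deliver to sniffers."""
    while not tunnel.empty():
        tenant, payload = tunnel.get()       # 435-440: receive + extract
        addr = sniffer_mapping[tenant]       # 445: look up target sniffer
        delivered.append((addr, payload))    # 450-455: retarget + transmit

capture_module([("tenant-a", b"ping"), ("tenant-b", b"pong")])
decapsulator_vm()
# Only tenant-a's traffic reaches a sniffer VM.
```

The simulation makes the isolation property visible: traffic of tenants absent from the monitoring list never enters the tunnel, so it can never reach a sniffer VM.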
`
`
`
HYBRID CLOUD NETWORK MONITORING SYSTEM FOR TENANT USE

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. application Ser. No. 14/579,911, filed Dec. 22, 2014 (now U.S. Pat. No. 9,860,309), which is incorporated by reference herein in its entirety.

BACKGROUND

Commercial enterprises are frequently turning to public cloud providers to meet their computing needs. The benefits of cloud computing are numerous. Among the benefits are lower operating costs, due to reduced spending on computing hardware, software, and support. In addition, since public clouds are generally accessible from any network-connected device, applications deployed to the cloud are more easily distributed to a diverse and global workforce.

Cloud architectures are used in cloud computing and cloud storage systems for offering infrastructure-as-a-service (IaaS) cloud services. Examples of cloud architectures include the VMware vCloud™ Director cloud architecture software, Amazon EC2™ web service, and OpenStack™ open source cloud computing service. IaaS cloud service is a type of cloud service that provides access to physical and/or virtual resources in a cloud environment. These services provide a tenant application programming interface (API) that supports operations for manipulating IaaS constructs such as virtual machines (VMs) and logical networks. However, the use of such public cloud services is typically kept separate from the use of existing computing resources in data centers managed by an enterprise.

Customers of cloud computing services are often referred to as "tenants," as the customers more or less "rent" computing hardware and software services from the cloud provider. Since a single public cloud can host many clients simultaneously in an isolated manner, public clouds are referred to as multi-tenant computing environments. In order to provide a level of isolation between applications deployed in the cloud by different tenants, cloud providers often provision virtual machines for their tenants. Each tenant virtual machine is capable of executing one or more client applications. The tenant virtual machine runs on top of a virtualized computing platform provided by the cloud, and, using the virtualized computing platform, communicates with other cloud tenants, as well as with external entities outside of the cloud. The tenant virtual machine is designed to give the individual tenant a reasonable level of control over computing services provided to the tenant, without having an undue effect on other tenants.

Among the tasks that tenants seek to perform is the monitoring of network traffic that is transmitted to and from virtual machines managed by a tenant and that may be executing virtual workloads. Monitoring network traffic enables tenant organizations to, for example, troubleshoot problems with that virtual machine, gauge future capacity requirements, or track down the source of malicious network requests (such as those experienced in a denial of service attack on the tenant virtual machine). However, there are challenges to using traffic monitoring devices (often referred to as network "sniffers") in a cloud computing system. Sniffer applications rely on special access to low-level network interfaces and network configuration data, which cloud computing systems typically abstract or hide from tenant organizations.

SUMMARY

In one embodiment, a method for monitoring network traffic in a cloud computing system is provided. The method comprises receiving a request to capture network traffic of a tenant port of a first virtual machine (VM) executing in the cloud computing system, wherein the first VM is associated with a first tenant organization different from a second organization managing the cloud computing system. The method further comprises instantiating a decapsulating VM having a first network interface and a second network interface, wherein the decapsulating VM is inaccessible to the first tenant organization. The method further comprises establishing an encapsulated port mirroring session from the tenant port of the first VM to the first network interface of the decapsulating VM, and decapsulating, by execution of the decapsulating VM, a plurality of packets comprising captured network traffic received via the encapsulated port mirroring session. The method further comprises forwarding the captured network traffic via the second network interface of the decapsulating VM to a sniffer VM.

Further embodiments provide a non-transitory computer-readable medium that includes instructions that, when executed, enable one or more computer hosts to implement one or more aspects of the above method, and a cloud-based computing system that includes one or more computer hosts programmed to implement one or more aspects of the above method.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a hybrid cloud computing system in which one or more embodiments of the present disclosure may be utilized.

FIG. 2 is a block diagram depicting a public cloud-based computing system, according to one or more embodiments.

FIG. 3 is a conceptual diagram depicting components that facilitate monitoring of network traffic for public cloud-based tenants, according to one or more embodiments.

FIG. 4 is a flow diagram that depicts one embodiment of a method for receiving and routing data packets to public cloud-based monitoring devices, each monitoring device corresponding to a public cloud-based tenant.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.

DETAILED DESCRIPTION

FIG. 1 is a block diagram of a hybrid cloud computing system 100 in which one or more embodiments of the present disclosure may be utilized. Hybrid cloud computing system 100 includes a virtualized computing system 102 and a cloud computing system 150, and is configured to provide a common platform for managing and executing virtual workloads seamlessly between virtualized computing system 102 and cloud computing system 150. In one embodiment, virtualized computing system 102 may be a data center controlled and administrated by a particular enterprise or business organization, while cloud computing system 150
`
is operated by a cloud computing service provider and exposed as a service available to account holders, such as the particular enterprise in addition to other enterprises. As such, virtualized computing system 102 may sometimes be referred to as an on-premise data center(s), and cloud computing system 150 may be referred to as a "public" cloud service. In some embodiments, virtualized computing system 102 itself may be configured as a private cloud service provided by the enterprise.

As used herein, an internal cloud or "private" cloud is a cloud in which a tenant and a cloud service provider are part of the same organization, while an external or "public" cloud is a cloud that is provided by an organization that is separate from a tenant that accesses the external cloud. For example, the tenant may be part of an enterprise, and the external cloud may be part of a cloud service provider that is separate from the enterprise of the tenant and that provides cloud services to different enterprises and/or individuals. In embodiments disclosed herein, a hybrid cloud is a cloud architecture in which a tenant is provided with seamless access to both private cloud resources and public cloud resources.

Virtualized computing system 102 includes one or more host computer systems 104. Hosts 104 may be constructed on a server grade hardware platform 106, such as an x86 architecture platform, a desktop, or a laptop. As shown, hardware platform 106 of each host 104 may include conventional components of a computing device, such as one or more processors (CPUs) 108, system memory 110, a network interface 112, storage 114, and other I/O devices such as, for example, a mouse and keyboard (not shown). Processor 108 is configured to execute instructions, for example, executable instructions that perform one or more operations described herein and may be stored in memory 110 and in local storage. Memory 110 is a device allowing information, such as executable instructions, cryptographic keys, virtual disks, configurations, and other data, to be stored and retrieved. Memory 110 may include, for example, one or more random access memory (RAM) modules. Network interface 112 enables host 104 to communicate with another device via a communication medium, such as a network 122 within virtualized computing system 102. Network interface 112 may be one or more network adapters, also referred to as a Network Interface Card (NIC). Storage 114 represents local storage devices (e.g., one or more hard disks, flash memory modules, solid state disks, and optical disks) and/or a storage interface that enables host 104 to communicate with one or more network data storage systems. An example of a storage interface is a host bus adapter (HBA) that couples host 104 to one or more storage arrays, such as a storage area network (SAN) or a network-attached storage (NAS), as well as other network data storage systems.

Each host 104 is configured to provide a virtualization layer that abstracts processor, memory, storage, and networking resources of hardware platform 106 into multiple virtual machines 120₁ to 120ₙ (collectively referred to as VMs 120) that run concurrently on the same host. VMs 120 run on top of a software interface layer, referred to herein as a hypervisor 116, that enables sharing of the hardware resources of host 104 by VMs 120. One example of hypervisor 116 that may be used in an embodiment described herein is a VMware ESXi hypervisor provided as part of the VMware vSphere solution made commercially available from VMware, Inc. Hypervisor 116 may run on top of the operating system of host 104 or directly on hardware components of host 104.

Virtualized computing system 102 includes a virtualization management module (depicted in FIG. 1 as virtualization manager 130) that may communicate to the plurality of hosts 104 via a network, sometimes referred to as a management network 126. In one embodiment, virtualization manager 130 is a computer program that resides and executes in a central server, which may reside in virtualized computing system 102, or alternatively, runs as a VM in one of hosts 104. One example of a virtualization management module is the vCenter® Server product made available from VMware, Inc. Virtualization manager 130 is configured to carry out administrative tasks for virtualized computing system 102, including managing hosts 104, managing VMs 120 running within each host 104, provisioning VMs, migrating VMs from one host to another host, and load balancing between hosts 104.

In one embodiment, virtualization manager 130 includes a hybrid cloud management module (depicted as hybrid cloud manager 132) configured to manage and integrate virtualized computing resources provided by cloud computing system 150 with virtualized computing resources of virtualized computing system 102 to form a unified "hybrid" computing platform. Hybrid cloud manager 132 is configured to deploy VMs in cloud computing system 150, transfer VMs from virtualized computing system 102 to cloud computing system 150, and perform other "cross-cloud" administrative tasks, as described in greater detail later. In one implementation, hybrid cloud manager 132 is a module or plug-in complement to virtualization manager 130, although other implementations may be used, such as a separate computer program executing in a central server or running in a VM in one of hosts 104.

In one embodiment, hybrid cloud manager 132 is configured to control network traffic into network 122 via a gateway component (depicted as a gateway 124). Gateway 124 (e.g., executing as a virtual appliance) is configured to provide VMs 120 and other components in virtualized computing system 102 with connectivity to an external network 140 (e.g., Internet). Gateway 124 may manage external public IP addresses for VMs 120 and route traffic incoming to and outgoing from virtualized computing system 102 and provide networking services, such as firewalls, network address translation (NAT), dynamic host configuration protocol (DHCP), load balancing, and virtual private network (VPN) connectivity over a network 140.

In one or more embodiments, cloud computing system 150 is configured to dynamically provide an enterprise (or users of an enterprise) with one or more virtual data centers 180 in which a user may provision VMs 120, deploy multi-tier applications on VMs 120, and/or execute workloads. Cloud computing system 150 includes an infrastructure platform 154 upon which a cloud computing environment 170 may be executed. In the particular embodiment of FIG. 1, infrastructure platform 154 includes hardware resources 160 having computing resources (e.g., hosts 162₁ to 162ₙ), storage resources (e.g., one or more storage array systems, such as SAN 164), and networking resources, which are configured in a manner to provide a virtualization environment 156 that supports the execution of a plurality of virtual machines 172 across hosts 162. It is recognized that hardware resources 160 of cloud computing system 150 may in fact be distributed across multiple data centers in different locations.

Each cloud computing environment 170 is associated with a particular tenant of cloud computing system 150, such as the enterprise providing virtualized computing system 102. In one embodiment, cloud computing environment 170 may
`
`US 10,944,811 B2
`
`5
`6
`In the embodiment of FIG . 1 , cloud computing environ
`be configured as a dedicated cloud service for a single tenant
`ment 170 supports the creation of a virtual data center 180
`comprised of dedicated hardware resources 160 ( i.e. , physi-
`having a plurality of virtual machines 172 instantiated to , for
`cally isolated from hardware resources used by other users
`example , host deployed multi - tier applications . A virtual
`of cloud computing system 150 ) . In other embodiments ,
`cloud computing environment 170 may be configured as part 5 data center 180 is a logical construct that provides compute ,
`network , and storage resources to an organization . Virtual
`of a multi - tenant cloud service with logically isolated vir
`data centers 180 provide an environment where VM 172 can
`tualized computing resources on a shared physical infra
`be created , stored , and operated , enabling complete abstrac
`structure . As shown in FIG . 1 , cloud computing system 150
`tion between the consumption of infrastructure service and
`may support multiple cloud computing environments 170 ,
`available to multiple enterprises in single - tenant and multi- 10 underlying resources . VMs 172 may be configured similarly
`to VMs 120 , as abstractions of processor , memory , storage ,
`tenant configurations .
`and networking resources of hardware resources 160 .
`In one embodiment , virtualization environment 156
`Virtual data center 180 includes one or more virtual
`includes an orchestration component 158 ( e.g. , implemented
`networks 182 used to communicate between VMs 172 and
`as a process running in a VM ) that provides infrastructure 15 managed by at least one networking gateway component
`resources to cloud computing environment 170 responsive
`( e.g. , gateway 184 ) , as well as one or more isolated internal
`to provisioning requests . For example , if an enterprise
`networks 186 not connected to gateway 184. Gateway 184
`required a specified number of virtual machines to deploy a
`( e.g. , executing as a virtual appliance ) is configured to
`web applications or to modify ( e.g. , scale ) a currently
`provide VMs 172 and other components in cloud computing
`running web application to support peak demands , orches- 20 environment 170 with conn onnectivity to an external network
`tration component 158 can initiate and manage the instan-
`140 ( e.g. , Internet ) . Gateway 184 manages external public IP
`tiation of virtual machines ( e.g. , VMs 172 ) on hosts 162 to
`addresses for virtual data center 180 and one or more private
`support such requests . In one embodiment , orchestration
`internal networks interconnecting VMs 172. Gateway 184 is
`component 158 instantiates virtual machines according to a
`configured to route traffic incoming to and outgoing from
`requested template that defines one or more virtual machines 25 virtual data center 180 and provide networking services ,
`having specified virtual computing resources ( e.g. , compute ,
`such as firewalls , network address translation ( NAT ) ,
`networking , storage resources ) . Further , orchestration com-
`dynamic host configuration protocol ( DHCP ) , and load
`ponent 158 monitors the infrastructure resource consump-
`balancing . Gateway 184 may be configured to provide
`tion levels and requirements of cloud computing environ-
`virtual private network ( VPN ) connectivity over a network
`ment 170 and provides additional infrastructure resources to 30 140 with another VPN endpoint , such as a gateway 124
`cloud computing environment 170 as needed or desired . In
`within virtualized computing system 102. In other embodi
`one example , similar to virtualized computing system 102 ,
`ments , gateway 184 may be configured to connect to com
`virtualization environment 156 may be implemented by
`municate with virtualized computing system 102 using a
`running on hosts 162 VMware ESXTM - based hypervisor
`high - throughput , dedicated link ( depicted as a direct connect
`technologies provided by VMware , Inc. of Palo Alto , Calif . 35 142 ) between virtualized computing system 102 and cloud
`( although it should be recognized that any other virtualiza-
`computing system 150. In one or more embodiments , gate
`tion technologies , including Xen and Microsoft Hyper - V
`ways 124 and 184 are configured to provide a “ stretched ”
`virtualization technologies may be utilized consistent with
`layer - 2 ( L2 ) network that spans virtualized computing sys
`the teachings herein ) .
`tem 102 and virtual data center 180 , as shown in FIG . 1 .
`In one embodiment , cloud computing system 150 may 40
`While FIG . 1 depicts a single connection between on
`include a cloud director 152 ( e.g. , run in one or more virtual
`premise gateway 124 and cloud - side gateway 184 for illus
`machines ) that manages allocation of virtual computing
`tration purposes , it should be recognized that multiple con
`resources to an enterprise for deploying applications . Cloud
`nections between multiple on - premise gateways 124 and
`director 152 may be accessible to users via a REST ( Rep-
`cloud - side gateways 184 may be used . Furthermore , while
`resentational State Transfer ) API ( Application Programming 45 FIG . 1 depicts a single instance of a gateway 184 , it is
`Interface ) or any other client - server communication proto-
`recognized that gateway 184 may represent multiple gate
`col . Cloud director 152 may authenticate connection
`way components within cloud computing system 150. In
`attempts from the enterprise using credentials issued by the
`some embodiments , a separate gateway 184 may be
`cloud computing provider . Cloud director 152 maintains and
`deployed for each virtual data center , or alternatively , for
`publishes a catalog 166 of available virtual machine tem- 50 each tenant . In some embodiments , a gateway instance may
`plates and packaged virtual machine applications that rep-
`be deployed that manages traffic with a specific tenant , while
`resent virtual machines that may be provisioned in cloud
`a separate gateway instance manages public - facing traffic to
`computing environment 170. A virtual machine template is
`the Internet . In yet other embodiments , one or more gateway
`a virtual machine image that is loaded with a pre - installed
`instances that are shared among all the tenants of cloud
`guest operating system , applications , and data , and is typi- 55 computing system 150 may be used to manage all public
`cally used to repeatedly create a VM having the pre - defined
`facing traffic incoming and outgoing from cloud computing
`configuration . A packaged virtual machine application is a
`system 150 .
`logical container of pre - configured virtual machines having
`In one embodiment , each virtual data center 180 includes
`software components and parameters that define operational
`a “ hybridity ” director module ( depicted as hybridity director
`details of the packaged application . An example of a pack- 60 174 ) configured to communicate with the corresponding
`aged VM application is vAppTM technology made available
`hybrid cloud manager 132 in virtualized computing system
`by VMware , Inc. , of Palo Alto , Calif . , although other tech-
`102 to enable a common virtualized computing platform
`nologies may be utilized . Cloud director 152 receives pro-
`between virtualized computing system 102 and cloud com
`visioning requests submitted ( e.g. , via REST API calls ) and
`puting system 150. Hybridity director 174 ( e.g. , executing as
`may propagates such requests to orchestration component 65 a virtual appliance ) may communicate with hybrid cloud
`158 to instantiate the requested virtual machines ( e.g. , VMs
`manager 132 using Internet - based traffic via a VPN tunnel
`established between gateways 124 and 184 , or alternatively ,
`172 ) .
`
`
`
using direct connect 142. In one embodiment, hybridity director 174 may control gateway 184 to control network traffic into virtual data center 180. In some embodiments, hybridity director 174 may control VMs 172 and hosts 162 of cloud computing system 150 via infrastructure platform 154.

As previously mentioned, network monitoring presents challenges in cloud-computing environments. First, some known approaches to network monitoring may be unsuitable due to the underlying infrastructure of cloud computing system 150. For example, one known approach to network monitoring, which requires a (source) port being monitored to be on the same physical host as a destination port doing the monitoring, would be impractical for cloud computing systems configured to dynamically and automatically migrate VMs between physical hosts. Second, a tenant organization may not have access to the management layer(s) or the physical hardware of cloud computing system 150. Accordingly, embodiments of the present disclosure provide a technique for monitoring VM traffic in a public cloud computing system context, which has an underlying network architecture configured to maintain data privacy amongst tenant VMs.

FIG. 2 is a block diagram that illustrates a public cloud-based computing system 200 having a virtual sniffer deployed therein to monitor network traffic for cloud tenants, according to embodiments. In one embodiment, cloud-based computing system 200 may be similar to, and/or include, one or more components of cloud computing system 150, depicted in FIG. 1. As shown, cloud-based computing system 200 includes a plurality of host computers 162. In the embodiment shown in FIG. 2, cloud-

comprised, in embodiments, of separate individual virtual switches, each of which exists on a corresponding host 162. Hypervisors 216₁ and 216₂ jointly manage the ports allocated to distributed virtual switch 270. As shown in FIG. 2, each of tenant VMs 172₁-172₃ connects to distributed virtual switch 270. Each tenant VM 172 includes one or more virtual NICs, which emulate a physical network adapter for the corresponding VM. Each virtual NIC (or VNIC) transmits and receives packets to and from distributed virtual switch 270. Each tenant VM 172 communicates with distributed virtual switch 270 by connecting to a virtual port. For example, tenant VM 172₁ is communicatively connected to a virtual port 272 of distributed virtual switch 270 (sometimes referred to as a "tenant port"). When a source tenant VM 172 (e.g., tenant VM 172₁) transmits a data packet to a remote destination, the source tenant VM 172₁ addresses the data packet (by appropriately updating a packet header) and transmits the data packet (via a corresponding virtual NIC) to distributed virtual switch 270.

In one or more embodiments, distributed virtual switch 270 is configured to establish a port mirroring session that sends traffic of a particular network interface to an application that analyzes network traffic. The port mirroring session copies packets (ingress, egress, or both) at a "source" port and sends the copies of the packets to a "destination" port for analysis. In one particular embodiment, the port mirroring session may be an Encapsulated Remote Switched Port Analyzer (ERSPAN) configured to encapsulate the copied packets using a generic routing encapsulation (GRE), which allows the copied packets to be transported across Layer 3 domains. In the embodiment shown in FIG. 2, distributed virtual switch 270 is configured to establish a port mirroring