Create an NSX Edge cluster

Now that we have created our Edges, we need to associate them with a new Edge Cluster.

From the Manager Console navigate to System -> Fabric -> Nodes -> Edge Cluster.

Select + ADD EDGE CLUSTER

Enter the name of the new Edge Cluster; the profile is proposed automatically.

In the section below, select the previously created Edges and move them into the cluster with the right arrow. Confirm the creation with the ADD button.

The cluster is listed with some of its characteristics.
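
For those who prefer automation, the same Edge Cluster can also be created through the NSX-T Manager REST API. Below is a minimal sketch in Python, assuming a hypothetical manager FQDN, placeholder credentials and the UUIDs of the Edge transport nodes created earlier; verify the payload fields against the API guide for your NSX-T version.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"  # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

# UUIDs of the Edge transport nodes created earlier (placeholders)
edge_node_ids = ["<edge-node-1-uuid>", "<edge-node-2-uuid>"]

payload = {
    "display_name": "edge-cluster-01",
    "members": [{"transport_node_id": node_id} for node_id in edge_node_ids],
}

# POST /api/v1/edge-clusters creates the Edge cluster
# (verify=False only because of the lab's self-signed certificate)
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/edge-clusters",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print("Edge cluster created:", resp.json().get("id"))
```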

Now that the cluster has been created, it is possible to create a T0 gateway for external connectivity.

From the Manager console navigate to Networking -> Tier-0 Gateways.

Select + ADD GATEWAY -> Tier-0

Enter the name of the new T0 gateway and the HA mode, and associate the newly created Edge Cluster. The Edge Cluster is not a mandatory field, but it is necessary if we want to connect our segments to the physical world: through the Edges it will be possible to define the interfaces that allow the T0 to establish BGP peering with the external ToRs.

After creating the T0 with the SAVE button, we can close the editing by selecting NO.

A few seconds and the new T0 will be ready for use!
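
The same Tier-0 can also be created declaratively through the Policy API. A minimal sketch follows, assuming a hypothetical manager FQDN and placeholder credentials, with the Edge Cluster association done through a locale-services child object; paths and field names should be verified against the Policy API reference for your version.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials
T0_ID = "t0-gw-01"

# Create/update the Tier-0 itself (PATCH is idempotent in the Policy API)
requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/{T0_ID}",
    json={"display_name": T0_ID, "ha_mode": "ACTIVE_STANDBY"},
    auth=AUTH, verify=False,
).raise_for_status()

# Associate the Edge cluster through a locale-services child object;
# the edge_cluster_path below is a placeholder and must point to your cluster.
requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/{T0_ID}/locale-services/default",
    json={"edge_cluster_path":
          "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-uuid>"},
    auth=AUTH, verify=False,
).raise_for_status()
```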


Install NSX Edges

Edges are core components of NSX: they provide functionality such as routing and connectivity to the outside world, NAT services, VPN, and more.

Let’s briefly see the requirements necessary for their installation.

Appliance Size | Memory | vCPU | Disk Space | Notes
NSX Edge Small | 4 GB | 2 | 200 GB | Lab and proof-of-concept deployments
NSX Edge Medium | 8 GB | 4 | 200 GB | Suitable for NAT, routing, L4 firewall and throughput less than 2 Gbps
NSX Edge Large | 32 GB | 8 | 200 GB | Suitable for NAT, routing, L4 firewall, load balancer and throughput up to 10 Gbps
NSX Edge Extra Large | 64 GB | 16 | 200 GB | Suitable when the total throughput required is multiple Gbps for L7 load balancer and VPN

As can be understood from the table, it is necessary to know in advance the services that will be configured on the edges and the total traffic throughput.

For production environments it is necessary to use at least the Medium size.

NSX Edge is supported only on ESXi with Intel and AMD processors (this is required for DPDK support).

If EVC is used, the minimum supported generation is Haswell.

Having made the appropriate sizing considerations, you can proceed to install the Edge.

From the Manager console, move to System -> Fabric -> Nodes -> Edge Transport Nodes.

Select + ADD EDGE NODE

Insert the necessary information to complete the wizard.

For the lab the Small version is sufficient; remember to verify that a DNS record was created for the FQDN.

Enter the credentials for the admin, root, and audit users, if used.

In this case I have enabled the flags that allow admin and root access via SSH, so that I can perform direct checks on the Edge.

Select the compute manager to which the Edge will be deployed. Indicate the cluster and all the necessary information.

Enter the Management Network configuration: this is the network through which NSX Manager will configure and manage the Edge. The IP address must match the FQDN entered on the first page of the wizard.

As a last step, indicate which Transport Zone will be associated with the virtual switch to which the Edge is connected. Specify the uplink profile, the TEP address assignment method, and the interface/portgroup to associate with the uplink.

This is the last page of the wizard; if all the information has been entered correctly, the deployment of the Edge will begin.
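
The deployment progress can also be followed from the API instead of the UI. Here is a minimal sketch that polls the node state, assuming the Edge node UUID is known; the exact state values may vary between NSX-T versions.

```python
import time
import requests

NSX = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials
EDGE_NODE_ID = "<edge-node-uuid>"       # placeholder: taken from the Edge Transport Nodes grid or the API

# Poll GET /api/v1/transport-nodes/<id>/state until the configuration converges
while True:
    state = requests.get(
        f"{NSX}/api/v1/transport-nodes/{EDGE_NODE_ID}/state",
        auth=AUTH, verify=False,
    ).json()
    print("current state:", state.get("state"))
    if state.get("state") in ("success", "failed"):
        break
    time.sleep(30)
```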

In the same way it is possible to create the other Edges that will later be used to form an Edge Cluster.


NSX Create transport zones

We continue our NSX journey by adding more pieces to our installation.

Let’s define the Transport Zones to which we are going to connect the transport nodes and edges. Normally two Transport Zones are defined.

Connect to the Manager console and move to System -> Fabric -> Transport Zones.

Select + ADD ZONE and create an Overlay transport zone.

In the same way we also create a transport zone of type VLAN.

In the summary we will now have our two Transport Zones.
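
Outside the UI, the result can be double-checked by listing the transport zones from the Manager API; a minimal sketch with a hypothetical manager FQDN and placeholder credentials.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# GET /api/v1/transport-zones should return both the Overlay and the VLAN zones
zones = requests.get(f"{NSX}/api/v1/transport-zones", auth=AUTH, verify=False).json()
for tz in zones.get("results", []):
    print(tz.get("display_name"), tz.get("transport_type"))
```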

Proceed to create uplink profiles to be used on transport nodes and edges.

Go to System -> Fabric -> Profiles, select + to add new profiles.

Enter the name of the profile and complete the Teamings section specifying the Teaming Policy and Uplinks.

This is the profile for the transport nodes (ESXi).

By default the Failover teaming policy is proposed; I have specified the names of the two uplinks (they will be used later in the transport node configuration).

Add the VLAN that will be used to transport the overlay traffic.

Also create the profile for the edges.

In this case I have not configured the standby uplink or the transport VLAN: the Edges are virtual appliances connected to port groups, so VLAN redundancy and tagging are delegated to the VDS.

Define IP Pools to be used for VTEP assignment to edges and transport nodes.

Go to Networking -> IP Management -> IP Address Pools and select ADD IP ADDRESS POOL.

Specify the name and define the subnet.

In the same way we configure the IP pool to be used for the Edges; of course it will use a different subnet.

NOTE: the two newly configured subnets must be routable to each other and support an MTU of at least 1600. This allows connectivity between transport nodes and Edges.
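
A quick way to validate both the routing and the MTU between the two subnets is a ping with the DF bit set and a 1572-byte payload (1572 bytes plus 28 bytes of ICMP/IP headers = 1600). The sketch below wraps the Linux ping flags and uses placeholder TEP addresses; from an ESXi host the equivalent test would be vmkping with the -d and -s options.

```python
import subprocess

# Placeholder TEP addresses, one from each of the two pools
targets = ["172.16.10.1", "172.16.20.1"]

for ip in targets:
    # Linux ping: -M do sets the DF bit, -s 1572 gives a 1600-byte IP packet, -c 3 sends three probes
    result = subprocess.run(
        ["ping", "-M", "do", "-s", "1572", "-c", "3", ip],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED (check routing/MTU)"
    print(f"{ip}: {status}")
```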

We now have all the elements to configure Edges and Transport nodes 🙂


NSX finalize the installation

First login done!

Let’s complete some basic configurations now.

First let’s load the licenses: NSX-T comes with a limited-functionality license by default.

Let’s install a valid license to enable the features we are interested in.

It is possible to use a 60-day evaluation license; you can request it at this link.

Go to System -> Appliances

For production environments it is recommended to deploy two additional managers to form a management cluster; I leave it to you to try the wizard to add them (it is not necessary to do it from vCenter). It is possible to configure a virtual IP that will always be assigned to the master node.
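
Once the additional managers have been deployed, the health of the management cluster can also be checked from the API; a minimal sketch, with field names to be confirmed for your NSX-T version.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# GET /api/v1/cluster/status reports the overall state of the management cluster
status = requests.get(f"{NSX}/api/v1/cluster/status", auth=AUTH, verify=False).json()
print("management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("control cluster:", status.get("control_cluster_status", {}).get("status"))
```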

We can see some messages indicating that a compute manager has not yet been configured and that configuration backups have not been set up either.

The compute manager is the vCenter that manages the ESXi nodes that will be prepared as transport nodes.

Click on COMPUTE MANAGER and then on + ADD COMPUTE MANAGER

Accept the vCenter thumbprint.

Wait for the vCenter to register successfully with the Manager.
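
Registration can also be verified from the API by listing the compute managers and querying their status; a minimal sketch, with endpoint and field names to be confirmed for your NSX-T version.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# List the registered compute managers, then query the status of each one
cms = requests.get(f"{NSX}/api/v1/fabric/compute-managers", auth=AUTH, verify=False).json()
for cm in cms.get("results", []):
    st = requests.get(
        f"{NSX}/api/v1/fabric/compute-managers/{cm['id']}/status",
        auth=AUTH, verify=False,
    ).json()
    print(cm.get("display_name"), st.get("registration_status"), st.get("connection_status"))
```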

Now we can see the hosts by going to System -> Fabric -> Nodes: select the newly added vCenter under Managed By and the nodes will appear.

It remains only to configure the backup! Let’s go to System -> Backup & Restore

Currently the only supported mode is copying via SFTP; enter the necessary parameters.

Of course, it is possible to schedule the backup process according to your needs.

In case of loss or corruption of the Manager, it will be possible to re-deploy the appliance and pass the backup path for a quick restore.

This completes the basic configuration of our Manager 🙂


NSX Manager Installation

Now let’s look at the NSX Manager installation; if you have checked all the prerequisites, this is the simplest part 🙂

First download the OVA from your VMware account; to date the latest release available is 3.1.3.3.

Now move to vCenter and deploy the OVA.

Select the OVA just downloaded

Set the VM name of the manager and the target datacenter

Select the destination cluster

NOTE: for PoCs or collapsed clusters we can install NSX Manager on the same cluster that we will later prepare for NSX-T; for production infrastructures it is advisable to use a cluster dedicated to management.

A detail of the template configurations is shown

Select the size of our Manager: for the lab we will use Small, but for production environments it is advisable to use at least Medium.

Select the datastore

Connect the portgroup to the manager’s nic

Enter the passwords of the users used by the appliance and the network parameters:

– root user password
– admin user password
– audit user password
– a password for internal use is also requested
– hostname: use the appropriate FQDN
– rolename: NSX Manager (the NSX Global Manager role is for federation only)
– management IP address, netmask and default gateway
– DNS addresses and domain search list
– NTP addresses
– whether to enable SSH (useful for troubleshooting)
– whether to allow SSH access for the root user

NOTE: the Internal Properties should not be touched.
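
The same deployment can also be scripted instead of clicking through the vSphere wizard. Below is a minimal sketch that drives ovftool from Python; the paths, credentials, addresses and OVF property names (nsx_passwd_0, nsx_hostname, and so on) are assumptions taken from a typical NSX-T OVA and must be checked against the properties exposed by your download.

```python
import subprocess

# Placeholder values: adapt paths, credentials and addressing to your environment
ova = "nsx-unified-appliance-3.1.3.3.ova"
target = "vi://administrator@vsphere.local@vcenter.lab.local/DC/host/MGMT-Cluster"

props = {
    "nsx_passwd_0": "VMware1!VMware1!",          # root password
    "nsx_cli_passwd_0": "VMware1!VMware1!",      # admin password
    "nsx_cli_audit_passwd_0": "VMware1!VMware1!",
    "nsx_hostname": "nsx-manager.lab.local",
    "nsx_role": "NSX Manager",
    "nsx_ip_0": "192.168.10.15",
    "nsx_netmask_0": "255.255.255.0",
    "nsx_gateway_0": "192.168.10.1",
    "nsx_dns1_0": "192.168.10.2",
    "nsx_domain_0": "lab.local",
    "nsx_ntp_0": "192.168.10.2",
    "nsx_isSSHEnabled": "True",
    "nsx_allowSSHRootLogin": "True",
}

cmd = [
    "ovftool", "--acceptAllEulas", "--powerOn",
    "--name=nsx-manager-01",
    "--deploymentOption=small",      # lab size; valid option names depend on the OVA
    "--datastore=datastore-01",
    "--network=Management",
]
cmd += [f"--prop:{key}={value}" for key, value in props.items()]
cmd += [ova, target]

# Run the deployment and fail loudly if ovftool returns an error
subprocess.run(cmd, check=True)
```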

If you have entered all parameters correctly, you will be presented with a summary window

Now it remains only to wait for the deployment to complete

You will then be able to connect to the manager to complete the configurations.


NSX-T Ports and Protocols

In most cases, the infrastructure components (vCenter, ESXi, NSX, etc.) reside on the same network. They therefore communicate with each other within the same subnet without having to cross routers and firewalls.

Normally this is the case… but if you operate within a large enterprise, it is likely that there are management clusters that manage other clusters dedicated only to workloads.

These clusters can reside on networks separated by Layer 3 (router) or Layer 4-7 (firewall) devices. In this case it is necessary to work with the network specialists to ensure full visibility between all the objects of the infrastructure.

You will hardly get an “any any permit” kind of visibility; you are more likely to have to provide a detailed list of source and destination addresses and the TCP/UDP ports on which to allow access.

Have you ever had to provide this list of addresses and ports? I have, and I can assure you that it is not as simple as it seems: there are many objects communicating over different services, and forgetting some rules can result in a lot of time spent troubleshooting 🙁

Things have changed over the years and now it is no longer necessary to search the installation manuals for lists with all the necessary ports.

Thanks to VMware, who created this very useful website: VMware Ports and Protocols 🙂

Take some time to browse through all the VMware products; you will realize how many services and ports are necessary to make the various solutions communicate.

But let’s move on! In this article we’ll just find out which ports are needed for NSX-T. From the homepage of the site above, select NSX-T Data Center.

It is possible to apply filters and select the rules by version and specific object.

Once the filters have been applied, it is also possible to export the list as PDF or Excel 🙂

By filtering with version 3.1 and source Manager we obtain the list of rules needed by NSX Manager to communicate with all the objects it needs.

I summarize them below; the source is obviously the NSX Manager’s IP:

Destination | Protocol | Port | Service Description
External LDAP server | TCP | 389, 636 | Active Directory/LDAP
NSX Manager | TCP | 9000, 5671, 1234, 443, 8080, 1235, 9040 | Distributed datastore, install/upgrade HTTP repository, NSX messaging
KVM and ESXi host | TCP | 443 | Management and provisioning connection
vCenter Server | TCP | 443 | NSX Manager to compute manager
Traceroute destination | UDP | 33434-33523 | Traceroute (troubleshooting)
Intermediate and root CA servers | TCP | 80 | Certificate Revocation Lists (CRLs)
Syslog servers | TCP/UDP | 6514, 514 | Syslog
SNMP servers | TCP/UDP | 161, 162 | SNMP
NTP servers | UDP | 123 | NTP
Management SCP servers | TCP | 22 | SSH (upload support bundle, backups, etc.)
DNS servers | TCP/UDP | 53 | DNS
Public Cloud Gateway (PCG) | TCP | 443 | NSX RPC channel(s)
github.com | TCP | 443 | Download IDS signatures from the Trustwave signature repository

The same applies to transport nodes and ESXi hosts:

Destination | Protocol | Port | Service Description
Intermediate and root CA servers | TCP | 80 | Certificate Revocation Lists (CRLs)
NSX Manager | TCP | 1234, 8080, 1235, 5671, 443 | NSX messaging channel, install/upgrade HTTP repository, management and provisioning connection
Syslog servers | TCP/UDP | 6514, 514 | Syslog
NSX-T Data Center transport node | UDP | 3784, 3785 | BFD session between TEPs, in the datapath using the TEP interface
GENEVE Termination End Point (TEP) | UDP | 6081 | Transport network

NOTE: these are the ports needed by NSX; ESXi hosts will clearly need other ports for normal operation (NTP, DNS, SSH, etc.).
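
When the time comes to verify that the requested rules are actually in place, a simple TCP connect test from the relevant source machine can save a lot of back and forth. A minimal sketch with placeholder hostnames and ports taken from the tables above; note that UDP services (NTP, BFD, GENEVE) cannot be validated with a plain connect.

```python
import socket

# Placeholder targets: destination host and the TCP ports expected to be open
checks = {
    "nsx-manager.lab.local": [443, 1234, 1235, 5671, 8080],
    "vcenter.lab.local": [443],
    "syslog.lab.local": [514, 6514],
}

for host, ports in checks.items():
    for port in ports:
        try:
            # A successful TCP handshake means the rule is in place
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} open")
        except OSError:
            print(f"{host}:{port} unreachable (check firewall rules)")
```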

Some rules may seem redundant, but remember that you have to distinguish between the objects that initiate the session and their destinations; sometimes you need rules that allow traffic in both directions on the same ports.

For version 3.1 the list contains about 50 services; have a look… knowing them will save you time 🙂


NSX-T Manager installation requirements

Eager to get your hands on NSX-T? Want to browse through all of the Manager’s web console entries? Do you prefer learning by doing?

If you are like me, I think so 🙂

However, experience has taught me that starting directly with the installation, skipping checks on requirements and proper sizing, sooner or later inevitably leads to having to reinstall everything 🙁

Don’t worry: the NSX-T Data Center guide provides workflows for every type of installation; this is the one for vSphere:

  1. Review the NSX Manager installation requirements.
  2. Configure the necessary ports and protocols.
  3. Install the NSX Manager.
  4. Log in to the newly created NSX Manager.
  5. Configure a compute manager.
  6. Deploy additional NSX Manager nodes to form a cluster.
  7. Create transport zones.
  8. Review the NSX Edge installation requirements.
  9. Install NSX Edges.
  10. Create an NSX Edge cluster.
  11. Create host transport nodes.

This article covers the first point; I will publish articles for all the other points in sequence 🙂 The reference version of NSX-T is 3.1.

The information reported here comes from the NSX-T Data Center Installation Guide, System Requirements section.

ESXi versions supported for the Host Transport Nodes role

For more details, see the Product Interoperability Matrices.

Minimum CPU and RAM requirements for the Transport Node profile

Hypervisor | CPU Cores | Memory
vSphere ESXi | 4 | 16 GB

Note: HW requirements are the same for KVM on Linux as well.

Network Requirements

Each Host Transport Node must be equipped with NICs supported by the installed version of ESXi; there is no mention of a minimum speed, but it is good practice to use at least two 10G NICs for performance and redundancy.

Note: two NICs are enough, but four can facilitate the migration from VSS/VDS to the new NVDS. I will explore this aspect in a future article.

Requirements for NSX Manager

The Manager is the heart of NSX: besides allowing the management of all configurations, it also incorporates the controller role. These are the requirements for the deployment.

Appliance Size | Memory | vCPU | Disk Space | Notes
NSX Manager Extra Small | 8 GB | 2 | 300 GB | Only for the Cloud Service Manager (from NSX-T 3.0)
NSX Manager Small VM | 16 GB | 4 | 300 GB | Lab and proof-of-concept deployments (from NSX-T 2.5.1)
NSX Manager Medium VM | 24 GB | 6 | 300 GB | Typical production environments - up to 64 hypervisors
NSX Manager Large VM | 48 GB | 12 | 300 GB | Large-scale deployments - more than 64 hypervisors

Network latency requirements

Maximum latency between Managers in a cluster: 10 ms.

Maximum latency between Managers and Transport Nodes: 150 ms.

Perhaps obvious, but these figures are often underestimated, especially in geographically distributed installations.

Addressing requirements and DNS configurations

An NSX installation comprises a cluster of 3 NSX Managers and an Edge cluster with at least 2 nodes. This applies to production environments; for PoCs or demos, the high availability of the components can be overlooked.

IP addresses must be provided for each object, along with the corresponding FQDNs in the relevant DNS zones.

Note: in addition to the A records, it is important to provide reverse lookup zones; the lack of these records can introduce reachability and communication problems between the various components.
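
A small pre-installation script can confirm that both the A and the PTR records exist for every component; the FQDNs and addresses below are placeholders for a hypothetical lab.

```python
import socket

# Placeholder FQDN -> expected IP mapping for managers and edges
expected = {
    "nsx-manager-01.lab.local": "192.168.10.15",
    "nsx-edge-01.lab.local": "192.168.10.21",
    "nsx-edge-02.lab.local": "192.168.10.22",
}

for fqdn, ip in expected.items():
    try:
        forward_ok = socket.gethostbyname(fqdn) == ip                      # A record lookup
    except socket.gaierror:
        forward_ok = False
    try:
        reverse_ok = socket.gethostbyaddr(ip)[0].rstrip(".").lower() == fqdn.lower()  # PTR lookup
    except (socket.herror, socket.gaierror):
        reverse_ok = False
    print(f"{fqdn}: A {'ok' if forward_ok else 'MISSING/WRONG'}, "
          f"PTR {'ok' if reverse_ok else 'MISSING/WRONG'}")
```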

If the GENEVE overlay is also present, a subnet (IP pool) must be allocated for the TEPs assigned to the Transport Nodes. We will have the opportunity to explore this topic further 🙂

Total Sizing

And finally, let’s sum it up! How many resources should be allocated for a typical NSX-T production installation?

Component | vCPU | Memory | Storage
NSX Manager Medium | 6 | 24 GB | 300 GB
Total x 3 Managers | 18 | 72 GB | 900 GB
NSX Edge Medium | 4 | 8 GB | 200 GB
Total x 2 Edges | 8 | 16 GB | 400 GB
Total resources | 26 | 88 GB | 1.3 TB
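
The totals are easy to double-check; a quick sanity check in Python, with the per-appliance figures taken from the tables above:

```python
# Per-appliance figures from the tables above: (count, vCPU, memory GB, disk GB)
components = {
    "NSX Manager Medium": (3, 6, 24, 300),
    "NSX Edge Medium":    (2, 4, 8, 200),
}

vcpu = sum(n * c for n, c, _, _ in components.values())
mem = sum(n * m for n, _, m, _ in components.values())
disk = sum(n * d for n, _, _, d in components.values())
print(f"Total: {vcpu} vCPU, {mem} GB RAM, {disk / 1000:.1f} TB storage")
# -> Total: 26 vCPU, 88 GB RAM, 1.3 TB storage
```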

The resources required by NSX-T are not negligible, and on small infrastructures they can have a strong impact. In larger environments, NSX Manager is installed on a dedicated Management cluster together with other components such as vRealize Log Insight and vRealize Network Insight.

Knowing the NSX-T requirements and doing a correct sizing is certainly the right place to start for a successful installation.

I conclude the article by leaving you a small task: check whether your infrastructure has the resources to install NSX! You don’t? Then design a new Management cluster 🙂


NSX … A journey of a thousand miles begins with a single step

A somewhat obvious title perhaps, but I think a very appropriate one.

It has been a while now that I have been looking for the right way to start a series of articles on NSX; every time I find an inspiration, I realize that there is always something that should be explained and told first. It’s a bit like taking something apart to understand how it works… you stop only when you have removed the last screw and you have dozens of small pieces on the table 🙂 The same thing is true for NSX: where do you start to understand and take ownership of this technology? What do you need to know first? What are the basic building blocks that make up NSX?

Let’s start with these simple questions to chart a roadmap that I hope will help those approaching NSX.

What is NSX?
Why not ask VMware directly? This is the link to go and look around:

VMware NSX Data Center

It is certainly an overview page, but for this reason it has been designed to highlight the strengths of NSX:

– Agility Through Automation
– Consistent Multi-Cloud Operations
– Intrinsic security
– Save on Both CapEx and OpEx

Ok, the last one is very commercial … but you will understand that it is the consequence of consolidating network and security functions into a single distributed virtualization platform!

Probably some of you were expecting definitions like SDN or similar; surprised to find topics like multi-cloud and automation?

And here we go into a little more detail:

vmware-nsx-solution-brief

In the datasheet we find the key features and a table of functionalities divided by license type

vmware-nsx-datasheet

NOTE: many companies choose NSX mainly for micro-segmentation… be aware that Distributed Firewalling is included only starting from the Professional license!

But it is not over yet … there are some aspects that need to be explored before jumping into the first installation!

We find clear guidance right in the ICM (Install, Configure and Manage) course for NSX Data Center:

Prerequisites:

Solid understanding of concepts presented in the following courses:
– VMware Data Center Virtualization Fundamentals
– VMware Introduction to Network Virtualization with NSX
– VMware Network Virtualization Fundamentals
– Network and Security Architecture with NSX (I added this one 🙂)

These are all free courses that you can access with a basic subscription to the VMware Learning Zone!

– Good understanding of TCP/IP services and network security and working experience with firewalls
– Working experience of enterprise switching and routing

These last two points are perhaps the most challenging for those who have no networking experience, but if you have read this far, you probably don’t lack interest!

NSX is a complete and complex product and a few quick reads are not enough to understand it in depth.

In my experience, working in Linux environments and taking networking courses (CCNA-like) can make learning NSX easier.

I leave you with all these hints and see you in the next article!


VMware Livefire

What is a livefire?

This term is often associated with military training. These are exercises where participants can live through an experience very similar to real combat.

In the technical field we can compare it to a training event where you operate in real environments, supported and led by an experienced instructor.

Associate VMware with Livefire and we immediately understand what we are talking about.

VMware Livefire was launched in early 2012 as an important tool to support the community of VMware partners. Yes, it is a tool designed for partners and not for end customers. Participation is by direct invitation from the Livefire team only, so if you are a VMware partner and want to attend a Livefire course, just ask your VMware partner manager for information.

This year I had the opportunity to participate in two Livefire courses:

  • NSX-V Livefire – Architecture and Design
  • NSX-T Livefire – Next Generation Cloud Networking

Two very interesting but very different courses: the first (NSX-V) focused more on architecture and design, the second (NSX-T) much more technical and delivered with Hands-on-Lab-style labs.

You can consult the course catalogue here; there’s something for everyone!

If you look at the course datasheets, you’ll find that precise prerequisites are required, normally a VCP-level certification.

The courses are undoubtedly challenging and require some experience (nothing like a normal Install, Configure and Manage course), but I can assure you that if you want to test yourself and meet international professionals and experts, this is the experience for you!

With this article I would like to start sharing my experience in the NSX field and tell you about this fascinating technology that continues to evolve day by day.

Scanda

Posted in livefire, nsx, vmug | Tagged , , , | Comments Off on VMware Livefire