
Cisco Linux KVM Nexus Dashboard


Specifications

  • libvirt version: 4.5.0-23.el7_7.1.x86_64
  • Nexus Dashboard version: 8.0.0

先决条件和指南

Before you proceed with deploying the Nexus Dashboard cluster in Linux KVM, you must:

  • Ensure that the KVM form factor supports your scale and services requirements.
  • Scale and services support and co-hosting vary based on the cluster form factor. You can use the Nexus Dashboard Capacity Planning tool to verify that the virtual form factor satisfies your deployment requirements.
  • Review and complete the general prerequisites described in Prerequisites: Nexus Dashboard.
  • Review and complete any additional prerequisites described in the Release Notes for the services you plan to deploy.
  • Ensure that the CPU family used for the Nexus Dashboard VMs supports the AVX instruction set; a quick host-side check is shown after the requirements table below.
  • Ensure you have enough system resources:

Table 1: Deployment Requirements

Requirements

  • KVM deployments are supported for Nexus Dashboard Fabric Controller services only.
  • You must deploy in CentOS 7.9 or Red Hat Enterprise Linux 8.6.
  • You must have the supported versions of Kernel and KVM:
    • For CentOS 7.9, Kernel version 3.10.0-957.el7.x86_64 and KVM version libvirt-4.5.0-23.el7_7.1.x86_64
    • For RHEL 8.6, Kernel version 4.18.0-372.9.1.el8.x86_64 and KVM version libvirt 8.0.0
  • 16 vCPUs
  • 64 GB RAM
  • 550 GB disk
  • Each node requires a dedicated disk partition.
  • The disk must have I/O latency of 20 ms or less.
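
To quickly check AVX support on the KVM host, you can inspect /proc/cpuinfo (a minimal sketch; any avx* output means the instruction set is available):

    # grep -o 'avx[0-9]*' /proc/cpuinfo | sort -u
    avx
    avx2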

To verify the I/O latency:

  1. Create a test directory.
    For example, test-data.
  2. Run the following command:
    # fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
  3. After the command is executed, confirm that the 99.00th=[<value>] in the
    fsync/fdatasync/sync_file_range section is below 20 ms, as shown in the sample output below.
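
The relevant section of the fio output looks similar to the following (abbreviated, with illustrative values; fio reports these latencies in microseconds, so 20 ms corresponds to 20,000 usec). In this sample, 99.00th=[ 1844] is about 1.8 ms, which passes the check:

    fsync/fdatasync/sync_file_range:
      sync (usec): min=492, max=10398, avg=1057.14, stdev=308.13
      sync percentiles (usec):
       | 1.00th=[  668], 5.00th=[  717], 10.00th=[  750], 20.00th=[  832],
       | ...
       | 99.00th=[ 1844], 99.50th=[ 1975], 99.90th=[ 3261], 99.95th=[ 3818],
       | 99.99th=[ 6063]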
  • We recommend that each Nexus Dashboard node is deployed in a different KVM hypervisor.

Deploying Nexus Dashboard in Linux KVM

This section describes how to deploy the Cisco Nexus Dashboard cluster in Linux KVM.

Before you begin

Ensure that you meet the requirements and guidelines described in Prerequisites and Guidelines, on page 1.

Procedure

Step 1
Download the Cisco Nexus Dashboard image.

Step 2
Copy the image to the Linux KVM servers where you will host the nodes.
You can use scp to copy the image, for example:
# scp nd-dk9.<version>.qcow2 root@<kvm-host-ip>:/home/nd-base
The following steps assume you copied the image into the /home/nd-base directory.

Step 3
Create the required disk images for the first node.
You will create a snapshot of the base qcow2 image you downloaded and use the snapshots as the disk images for the nodes’ VMs. You will also need to create a second disk image for each node.

  • Log in to your KVM host as the root user.
  • Create a directory for the node's snapshots.
    The following steps assume you create the snapshots in the /home/nd-node1 directory.
    # mkdir -p /home/nd-node1/
    # cd /home/nd-node1
  • Create the snapshot.
    In the following command, replace /home/nd-base/nd-dk9.<version>.qcow2 with the location of the base image you copied in the previous step.
    # qemu-img create -f qcow2 -b /home/nd-base/nd-dk9.<version>.qcow2 /home/nd-node1/nd-node1-disk1.qcow2

Note
If you are deploying in RHEL 8.6, you may need to provide an additional parameter to define the destination snapshot’s format as well. In that case, update the above command to the following:
# qemu-img create -f qcow2 -b /home/nd-base/nd-dk9.2.1.1a.qcow2 /home/nd-node1/nd-node1-disk1.qcow2 -F qcow2

  • Create the additional disk image for the node.
    Each node requires two disks: a snapshot of the base Nexus Dashboard qcow2 image and a second 500GB disk.
    # qemu-img create -f qcow2 /home/nd-node1/nd-node1-disk2.qcow2 500G
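
Optionally, you can sanity-check both images with qemu-img info: the snapshot should list the base image as its backing file, and the second disk should report a 500G virtual size (output abbreviated and illustrative; exact fields vary by qemu version):

    # qemu-img info /home/nd-node1/nd-node1-disk1.qcow2
    image: /home/nd-node1/nd-node1-disk1.qcow2
    file format: qcow2
    backing file: /home/nd-base/nd-dk9.<version>.qcow2
    # qemu-img info /home/nd-node1/nd-node1-disk2.qcow2
    image: /home/nd-node1/nd-node1-disk2.qcow2
    file format: qcow2
    virtual size: 500 GiB (536870912000 bytes)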

Step 4
Repeat the previous step to create the disk images for the second and third nodes. Before proceeding to the next step, you should have the following:

  • For the first node, /home/nd-node1/ directory with two disk images:
    • /home/nd-node1/nd-node1-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node1/nd-node1-disk2.qcow2, which is a new 500GB disk you created.
  • For the second node, /home/nd-node2/ directory with two disk images:
    • /home/nd-node2/nd-node2-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node2/nd-node2-disk2.qcow2, which is a new 500GB disk you created.
  • For the third node, /home/nd-node3/ directory with two disk images:
    • /home/nd-node3/nd-node3-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node3/nd-node3-disk2.qcow2, which is a new 500GB disk you created.
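
If you prefer to script the disk creation for all three nodes, the per-node commands can be wrapped in a simple loop; a minimal sketch, assuming the /home/nd-base image location used above (include -F qcow2 where required, such as on RHEL 8.6):

    # for n in 1 2 3; do
        mkdir -p /home/nd-node$n
        qemu-img create -f qcow2 -b /home/nd-base/nd-dk9.<version>.qcow2 -F qcow2 /home/nd-node$n/nd-node$n-disk1.qcow2
        qemu-img create -f qcow2 /home/nd-node$n/nd-node$n-disk2.qcow2 500G
      done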

Step 5
Create the first node’s VM.

  • Open the KVM console and click New Virtual Machine.
    You can open the KVM console from the command line using the virt-manager command.
    If your Linux KVM environment does not have a desktop GUI, run the following command instead and proceed to Step 6.
    # virt-install --import --name <node-name> --memory 65536 --vcpus 16 --os-type generic \
      --disk path=/path/to/disk1/nd-node1-d1.qcow2,format=qcow2,bus=virtio \
      --disk path=/path/to/disk2/nd-node1-d2.qcow2,format=qcow2,bus=virtio \
      --network bridge=<mgmt-bridge-name>,model=virtio \
      --network bridge=<data-bridge-name>,model=virtio \
      --console pty,target_type=serial --noautoconsole --autostart
  • In the New VM screen, choose Import existing disk image option and click Forward.
  • In the Provide existing storage path field, click Browse and select the nd-node1-disk1.qcow2 file.
    We recommend that each node's disk image is stored on its own disk partition.
  • Choose Generic for the OS type and Version, then click Forward.
  • Specify 64GB memory and 16 CPUs, then click Forward.
  • Enter the Name of the virtual machine, for example nd-node1, then check the Customize configuration before install option and click Finish.

Note
You must select the Customize configuration before install checkbox to be able to make the disk and network card customizations required for the node.
The VM details window will open.

In the VM details window, change the NIC’s device model:

  • Select NIC <mac>.
  • For Device model, choose e1000.
  • For Network Source, choose the bridge device and provide the name of the “mgmt” bridge.

Note
Creating a bridge device is outside the scope of this guide; it depends on the distribution and version of your operating system. For more information, see your operating system's documentation, such as Red Hat's Configuring a network bridge.

In the VM details window, add the second NIC:

  • Click Add Hardware.
  • In the Add New Virtual Hardware screen, select Network.
  • For Network Source, choose the bridge device and provide the name of the created “data” bridge.
  • Leave the default MAC address value.
  • For Device model, choose e1000.

In the VM details window, add the second disk image:

  • Click Add Hardware.
  • In the Add New Virtual Hardware screen, select Storage.
  • For the disk’s bus driver, choose IDE.
  • Select Select or create custom storage, click Manage, and select the nd-node1-disk2.qcow2 file you created.
  • Click Finish to add the second disk.

Note
Ensure that you enable the Copy host CPU configuration option in the Virtual Machine Manager UI.
Finally, click Begin Installation to finish creating the node's VM.
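
You can also confirm the node VM's resources and disks from the command line with virsh (a quick sanity check; output abbreviated and illustrative):

    # virsh dominfo nd-node1
    Name:           nd-node1
    State:          running
    CPU(s):         16
    Max memory:     67108864 KiB
    # virsh domblklist nd-node1
    Target   Source
    ------------------------------------------------
    vda      /home/nd-node1/nd-node1-disk1.qcow2
    hda      /home/nd-node1/nd-node1-disk2.qcow2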

Step 6
Repeat the previous steps to deploy the second and third nodes, then start all virtual machines.

Note
If you are deploying a single-node cluster, you can skip this step.
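
If any of the node VMs are not already running, you can start them from the shell; a minimal sketch, assuming the node names used in this guide:

    # for n in nd-node1 nd-node2 nd-node3; do virsh start $n; done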

Step 7
Open one of the node’s console and configure the node’s basic information. If your Linux KVM environment does not have a desktop GUI, run the virsh console <node-name> command to access the console of the node.

  • Press any key to begin initial setup.
    You will be prompted to run the first-time setup utility:
    [ OK ] Started atomix-boot-setup.
           Starting Initial cloud-init job (pre-networking)…
           Starting logrotate…
           Starting logwatch…
           Starting keyhole…
    [ OK ] Started keyhole.
    [ OK ] Started logrotate.
    [ OK ] Started logwatch.
    Press any key to run first-boot setup on this console…
  • Enter and confirm the admin password.
    This password will be used for the rescue-user SSH login as well as the initial GUI password.
    Note
    You must provide the same password for all nodes or the cluster creation will fail.
    Admin Password:
    Reenter Admin Password:
  • Enter the management network information.
    Management Network:
      IP Address/Mask: 192.168.9.172/24
      Gateway: 192.168.9.1
  • For the first node only, designate it as the “Cluster Leader”.
    You will log in to the cluster leader node to finish the configuration and complete the cluster creation.
    Is this the cluster leader?: y
  • Review and confirm the entered information.
    You will be asked if you want to change the entered information. If all the fields are correct, choose n to proceed.
    If you want to change any of the entered information, enter y to restart the basic configuration script.
    Please review the config
    Management network:
      Gateway: 192.168.9.1
      IP Address/Mask: 192.168.9.172/24
    Cluster leader: yes
    Re-enter config? (y/N): n

Step 8
Repeat the previous step to configure the initial information for the second and third nodes.
You do not have to wait for the first node's configuration to complete; you can begin configuring the other two nodes simultaneously.

Note
You must provide the same password for all nodes or the cluster creation will fail.
The steps to deploy the second and third nodes are identical, with the only exception being that you must indicate that they are not the Cluster Leader.

Step 9
Wait for the initial bootstrap process to complete on all nodes.
After you provide and confirm the management network information, the initial setup on the first node (Cluster Leader) configures the networking and brings up the UI, which you will use to add the other two nodes and complete the cluster deployment.
Please wait for system to boot: [#########################] 100%
System up, please wait for UI to be online.
System UI online, please login to https://192.168.9.172 to continue.

Step 10
Open your browser and navigate to https://<node-mgmt-ip> to open the GUI.
The rest of the configuration workflow takes place from one of the nodes' GUI. You can choose any one of the nodes you deployed to begin the bootstrap process; you do not need to log in to or configure the other two nodes directly.
Enter the password you provided in the previous step and click Login.


Step 11
Provide the Cluster Details.
In the Cluster Details screen of the Cluster Bringup wizard, provide the following information:


  • Provide the Cluster Name for this Nexus Dashboard cluster.
    The cluster name must follow the RFC-1123 requirements.
  • (Optional) If you want to enable IPv6 functionality for the cluster, check the Enable IPv6 checkbox.
  • Click +Add DNS Provider to add one or more DNS servers.
    After entering the information, click the check mark icon to save it.
  • (Optional) Click +Add DNS Search Domain to add a search domain.

After entering the information, click the check mark icon to save it.

  • (Optional) If you want to enable NTP server authentication, enable the NTP Authentication checkbox and click Add NTP Key.
    In the additional fields, provide the following information:
    • NTP Key – a cryptographic key that is used to authenticate the NTP traffic between the Nexus Dashboard and the NTP server(s). You will define the NTP servers in the following step, and multiple NTP servers can use the same NTP key.
    • Key ID – each NTP key must be assigned a unique key ID, which is used to identify the appropriate key to use when verifying the NTP packet.
    • Auth Type – this release supports MD5, SHA, and AES128CMAC authentication types.
    • Choose whether this key is Trusted. Untrusted keys cannot be used for NTP authentication.

Note
After entering the information, click the check mark icon to save it.
For the complete list of NTP authentication requirements and guidelines, see Prerequisites and Guidelines.

  • Click +Add NTP Host Name/IP Address to add one or more NTP servers.
    In the additional fields, provide the following information:
  • NTP Host – you must provide an IP address; fully qualified domain names (FQDN) are not supported.
  • Key ID – if you want to enable NTP authentication for this server, provide the key ID of the NTP key you defined in the previous step.
    If NTP authentication is disabled, this field is grayed out.
  • Choose whether this NTP server is Preferred.
    After entering the information, click the check mark icon to save it.

Note
If the node into which you are logged in is configured with only an IPv4 address, but you have checked Enable IPv6 in a previous step and provided an IPv6 address for an NTP server, you will get a validation error.

This is because the node does not have an IPv6 address yet (you will provide it in the next step) and is unable to connect to an IPv6 address of the NTP server.
In this case, simply finish providing the other required information as described in the following steps and click Next to proceed to the next screen where you will provide IPv6 addresses for the nodes.
If you want to provide additional NTP servers, click +Add NTP Host Name/IP Address again and repeat this substep.

  • Provide a Proxy Server, then click Validate.
    For clusters that do not have direct connectivity to the Cisco cloud, we recommend configuring a proxy server to establish the connectivity. This allows you to mitigate risk from exposure to non-conformant hardware and software in your fabrics. A quick reachability check for the proxy is sketched at the end of this step.
  • You can also choose to provide one or more IP addresses for which communication should bypass the proxy by clicking +Add Ignore Host.
    The proxy server must have the following URLs enabled:
  • If you want to skip proxy configuration, click Skip Proxy.
  • (Optional) If your proxy server requires authentication, enable Authentication required for Proxy, provide the login credentials, then click Validate.
  • (Optional) Expand the Advanced Settings category and change the settings if required.
    Under Advanced Settings, you can configure the following:
  • Provide custom App Network and Service Network.
    The application overlay network defines the address space used by the application services running in the Nexus Dashboard. The field is pre-populated with the default 172.17.0.1/16 value.
    The services network is an internal network used by the Nexus Dashboard and its processes. The field is pre-populated with the default 100.80.0.0/16 value.
    If you checked the Enable IPv6 option earlier, you can also define IPv6 subnets for the App and Service networks.
    The App and Service networks are described in the Prerequisites and Guidelines section earlier in this document.
  • Click Next to continue.
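
Before relying on the proxy, you can sanity-check its reachability from the KVM host or a node with curl (a sketch only; <proxy-ip> and <proxy-port> are placeholders for your environment, and the target URL is merely illustrative):

    # curl -x http://<proxy-ip>:<proxy-port> -I https://www.cisco.com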

Step 12
In the Node Details screen, update the first node's information.
You have defined the Management network and IP address for the node into which you are currently logged in during the initial node configuration in earlier steps, but you must also provide the Data network information for the node
before you can proceed with adding the other primary nodes and creating the cluster.

  • Click the Edit button next to the first node.
    The node's Serial Number, Management Network information, and Type are automatically populated, but you must provide the other information.
  • Provide the Name for the node.
    The node’s Name will be set as its hostname, so it must follow the RFC-1123 requirements.
  • From the Type dropdown, select Primary.
    The first 3 nodes of the cluster must be set to Primary. You will add the secondary nodes in a later step if they are required to enable cohosting of services and higher scale.
  • In the Data Network area, provide the node’s Data Network information.
    You must provide the data network IP address, netmask, and gateway. Optionally, you can also provide the VLAN ID for the network. For most deployments, you can leave the VLAN ID field blank.
    If you had enabled IPv6 functionality in a previous screen, you must also provide the IPv6 address, netmask, and gateway.
    Note
    If you want to provide IPv6 information, you must do it during cluster bootstrap process. To change IP configuration later, you would need to redeploy the cluster.
    All nodes in the cluster must be configured with either only IPv4, only IPv6, or dual stack IPv4/IPv6.
  • (Optional) If your cluster is deployed in L3 HA mode, Enable BGP for the data network.
    BGP configuration is required for the Persistent IPs feature used by some services, such as Insights and Fabric Controller. This feature is described in more detail in Prerequisites and Guidelines and the “Persistent IP Addresses” sections of the Cisco Nexus Dashboard User Guide.
    Note
    You can enable BGP at this time or in the Nexus Dashboard GUI after the cluster is deployed.
    If you choose to enable BGP, you must also provide the following information:
  • ASN (BGP Autonomous System Number) of this node.
    You can configure the same ASN for all nodes or a different ASN per node.
  • For pure IPv6, the Router ID of this node.
    The router ID must be an IPv4 address, for example, 1.1.1.1.
  • BGP Peer Details, which includes the peer’s IPv4 or IPv6 address and peer’s ASN.
  • Click Save to save the changes.

Step 13
In the Node Details screen, click Add Node to add the second node to the cluster.
If you are deploying a single-node cluster, skip this step.

  • In the Deployment Details area, provide the Management IP Address and Password for the second node. You defined the management network information and the password during the initial node configuration steps.
  • Click Validate to verify connectivity to the node.
    The node’s Serial Number and the Management Network information are automatically populated after connectivity is validated.
  • Provide the Name for the node.
  • From the Type dropdown, select Primary.
    The first 3 nodes of the cluster must be set to Primary. You will add the secondary nodes in a later step if they are required to enable cohosting of services and higher scale.
  • In the Data Network area, provide the node’s Data Network information.
    You must provide the data network IP address, netmask, and gateway. Optionally, you can also provide the VLAN ID for the network. For most deployments, you can leave the VLAN ID field blank.
    If you had enabled IPv6 functionality in a previous screen, you must also provide the IPv6 address, netmask, and gateway.
    Note
    If you want to provide IPv6 information, you must do it during cluster bootstrap process. To change IP configuration later, you would need to redeploy the cluster.
    All nodes in the cluster must be configured with either only IPv4, only IPv6, or dual stack IPv4/IPv6.
  • (Optional) If your cluster is deployed in L3 HA mode, Enable BGP for the data network.
    BGP configuration is required for the Persistent IPs feature used by some services, such as Insights and Fabric Controller. This feature is described in more detail in Prerequisites and Guidelines and the “Persistent IP Addresses” sections of the Cisco Nexus Dashboard User Guide.
    Note
    You can enable BGP at this time or in the Nexus Dashboard GUI after the cluster is deployed.
    If you choose to enable BGP, you must also provide the following information:
  • ASN (BGP Autonomous System Number) of this node.
    You can configure the same ASN for all nodes or a different ASN per node.
  • For pure IPv6, the Router ID of this node.
    The router ID must be an IPv4 address, for example, 1.1.1.1.
  • BGP Peer Details, which includes the peer’s IPv4 or IPv6 address and peer’s ASN.
  • Click Save to save the changes.
  • Repeat this step for the final (third) primary node of the cluster.

Step 14
In the Node Details page, verify the provided information and click Next to continue.

Step 15
Choose the Deployment Mode for the cluster.

  • Choose the services you want to enable.
    Prior to release 3.1(1), you had to download and install individual services after the initial cluster deployment was completed. Now you can choose to enable the services during the initial installation.
    Note
    Depending on the number of nodes in the cluster, some services or cohosting scenarios may not be supported. If you are unable to choose the desired number of services, click Back and ensure that you have provided enough secondary nodes in the previous step.
  • Click Add Persistent Service IPs/Pools to provide one or more persistent IPs required by Insights or Fabric Controller services.
    For more information about persistent IPs, see the Prerequisites and Guidelines section.
  • Click Next to continue.

Step 16
In the Summary screen, review and verify the configuration information, then click Save to build the cluster.
During the node bootstrap and cluster bring-up, the overall progress as well as each node's individual progress will be displayed in the UI. If you do not see the bootstrap progress advance, manually refresh the page in your browser to update the status.
It may take up to 30 minutes for the cluster to form and all the services to start. When the cluster configuration is complete, the page will reload to the Nexus Dashboard GUI.

Step 17
Verify that the cluster is healthy.
It may take up to 30 minutes for the cluster to form and all the services to start.
After the cluster becomes available, you can access it by browsing to any one of your nodes’ management IP addresses.
The default password for the admin user is the same as the rescue-user password you chose for the first node. During this time, the UI will display a banner at the top stating “Service Installation is in progress, Nexus Dashboard configuration tasks are currently disabled”.

After the cluster is fully deployed and all services are started, you can check the Overview page to ensure the cluster is healthy.


Alternatively, you can log in to any one of the nodes via SSH as the rescue-user, using the password you provided during node deployment, and use the acs health command to check the status:

  • While the cluster is converging, you may see the following outputs:
    $ acs health
    k8s install is in-progress
    $ acs health
    k8s services not in desired state – […]
    $ acs health
    k8s: Etcd cluster is not ready
  • When the cluster is up and running, the following output will be displayed:
    $ acs health
    All components are healthy

Note
In some situations, you might power cycle a node (power it off and then back on) and find it stuck in this stage:
deploy base system services
This is due to an issue with etcd on the node after a reboot of the pND (Physical Nexus Dashboard) cluster. To resolve the issue, enter the acs reboot clean command on the affected node.

Step 18
After the Nexus Dashboard and services are deployed, you can configure each service as described in its configuration and operations articles.

  • For Fabric Controller, see the NDFC persona configuration white paper and documentation library.
  • For Orchestrator, see the documentation page.
  • For Insights, see the documentation library.

Frequently Asked Questions

What are the deployment requirements for Nexus Dashboard in Linux KVM?

The deployment requires libvirt version 4.5.0-23.el7_7.1.x86_64 and Nexus Dashboard version 8.0.0.

How can I verify I/O latency for the deployment?

To verify I/O latency, create a test directory, run the fio command shown above, and confirm that the reported 99th-percentile fsync latency is below 20 ms.
