tag:blogger.com,1999:blog-5555763225747733332024-03-06T07:30:04.566+02:00Sysadmin StoriesShort stories on virtualization, clouds and other technologies based on personal experiences and opinions. razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.comBlogger110125tag:blogger.com,1999:blog-555576322574773333.post-45280665635380822712023-07-09T14:46:00.003+03:002023-07-10T14:35:17.924+03:00Veeam Backup for Google Cloud - Zero Trust Architecture with Cloud Identity-Aware Proxy<p>Having security embedded by design into your architecture is more than just a best practice. It is how anyone should start work on any project in a public, private or hybrid cloud. Veeam Backup for Google Cloud (VBG) is one of the technologies that enables data security and resiliency by backing up and protecting your data running in the cloud. However, VBG itself resides in that same cloud, so one of the first tasks is to make sure it is deployed and accessed in a secure manner. </p><p>The challenge arises from the need to access the VBG console for configuration and operation activities. The focus of this post is securing that access. </p><p>In a standard deployment you would have your VBG appliance installed in a VPC, apply firewall rules to restrict access to VBG, and then connect to the console over an SSL-encrypted browser session. This connectivity can be done over the Internet or, in more complex scenarios, over VPN or interconnect links. If you are connecting to VBG over the Internet, you need to expose VBG on a public IP address and restrict access to that IP address from your source IP. This is the use case we address in this article. Another scenario, using bastion servers and private connectivity, is not covered here; however, the principles and mechanisms presented can still apply. 
</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj70mL9L3h4-PQS2ZyM9f6rloZjRXdWIfVWrbkFOSjuJmB7Ui2zs947TZCbKLoM79szygohfVS_iZwy9kX_n9VfwFS8tmv3vVfcoKixnArAiR_ROqndfNmWMsXtTmQ2Q-Z3kKJe8DYx5WqroktsYR5bzpW8obYvzIa_YwtprM25LGXCzESHfo9kRvd7AY0/s719/vbg-access-internet.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="490" data-original-width="719" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj70mL9L3h4-PQS2ZyM9f6rloZjRXdWIfVWrbkFOSjuJmB7Ui2zs947TZCbKLoM79szygohfVS_iZwy9kX_n9VfwFS8tmv3vVfcoKixnArAiR_ROqndfNmWMsXtTmQ2Q-Z3kKJe8DYx5WqroktsYR5bzpW8obYvzIa_YwtprM25LGXCzESHfo9kRvd7AY0/s320/vbg-access-internet.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p>As you can easily see, there are some disadvantages to having VBG directly accessible from the Internet. First, VBG is exposed to the entire Internet. Having a firewall rule that limits the source IP addresses allowed to connect to the external IP address of VBG raises the level of trust, but it does not apply zero trust principles. We don't know who is hiding behind that allowed source IP address. There is no user identification and authorization in place before the user is allowed to open a session to the VBG console. Anyone connecting from that specific source IP address is automatically trusted.</p><p>How can we make sure that whoever or whatever is trying to connect to VBG is actually allowed to do it? Please mind that we are talking about the connection to the VBG console before any authentication and authorization into VBG is applied. <b>We want to make sure that whoever tries to enter credentials in the VBG console is identified and has the permissions to do that action. </b></p><p>Think of use cases where a user has lost their rights to manage backups but still has access to the backup infrastructure. 
You would want a secure and simple way of controlling that access and being able to easily revoke it. In this situation we can use Cloud Identity and Access Management (IAM) and Cloud Identity-Aware Proxy (IAP).</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBLX3dCvN-ph5swUcUQKy8FPQUaq6TNc1NEkfsQFnFSNMdyK3awZTWZJXZ3nGGnFIAG4Gu_nMkYu4pexr_Pp3J1n8GOIQtFKVNdNhrCOm67l5b_ygieCrSXAd_tSH80zJgFXHiza66GDZFq8wArhffCORbjEDD11IWyJ2dpkU0F8ZUYNfqGIPKuY4mLxo/s1051/vbg-access-iap-tunnel.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="491" data-original-width="1051" height="149" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBLX3dCvN-ph5swUcUQKy8FPQUaq6TNc1NEkfsQFnFSNMdyK3awZTWZJXZ3nGGnFIAG4Gu_nMkYu4pexr_Pp3J1n8GOIQtFKVNdNhrCOm67l5b_ygieCrSXAd_tSH80zJgFXHiza66GDZFq8wArhffCORbjEDD11IWyJ2dpkU0F8ZUYNfqGIPKuY4mLxo/s320/vbg-access-iap-tunnel.png" width="320" /></a></div><p><b>How does it work?</b></p><p>Cloud IAP implements TCP forwarding, which encrypts any type of TCP traffic between the client initiating the session and IAP using HTTPS. In our case we normally connect to the VBG console using HTTPS (web browser). With IAP TCP forwarding, the initial HTTPS traffic is wrapped in another HTTPS connection. From IAP to VBG, the traffic is sent without the additional layer of encryption. The purpose of using IAP is to keep VBG connected to private networks only and to control which users can actually connect, using IAM users and permissions. </p><p>The public IP of VBG will be removed; if outbound connectivity is needed, a NAT gateway can provide it, but that is out of scope for the current post.</p><p>To summarize, instead of allowing anyone behind an IP address to connect to our VBG portal, we restrict this connectivity to specific IAM users. 
Additionally, we keep VBG on a private network.</p><p><b>Guide</b></p><p>Start by preparing the project: enable the Cloud Identity-Aware Proxy API. In the console </p><p></p><ul style="text-align: left;"><li><b>APIs & Services > Enable APIs and Services </b></li><li>search for <b>Cloud Identity-Aware Proxy API </b> and press <b>Enable</b>.</li></ul>Once enabled, you will see it displayed in the list of enabled APIs<br /><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifGBhlFdIlX_-Tfn4QWI0500hks1YZ_NXyoi48vVa33fZXi8QAUITWKSYTZUxUgtIBHzck-qxNK8Sotq-1b27PYjy8vGfVUaG-087CajB1A-yONd51hA1pJB1SwNInFCOwiA35jwjc6ztr4wA67BJTGkvTJI6FrZdtrLohx-O7han4Os7KmvmFN3xlCYo/s1421/gcp-enabled-apis-and-services.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="597" data-original-width="1421" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifGBhlFdIlX_-Tfn4QWI0500hks1YZ_NXyoi48vVa33fZXi8QAUITWKSYTZUxUgtIBHzck-qxNK8Sotq-1b27PYjy8vGfVUaG-087CajB1A-yONd51hA1pJB1SwNInFCOwiA35jwjc6ztr4wA67BJTGkvTJI6FrZdtrLohx-O7han4Os7KmvmFN3xlCYo/s320/gcp-enabled-apis-and-services.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Allow IAP to connect to your VM by creating a firewall rule. In the console, go to <b>VPC network > Firewall </b>and press <b>Create Firewall Rule</b></div><div class="separator" style="clear: both; text-align: left;"><ul style="text-align: left;"><li>name: allow-ingress-from-iap</li><li>targets: <b>Specified target tags</b> and select the tag of your VBG instance. We are using the "vbg-europe" network tag. 
If you don't use network tags you can select <i>"All instances in the network"</i></li><li>source IPv4 ranges: Add the range 35.235.240.0/20, which contains all IP addresses that IAP uses for TCP forwarding.</li><li>protocols and ports - specify the port you want to access - TCP 443</li><li>press Save</li></ul></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyX-ykxHJ53deOLgmBauqqhsEe2qHJbZgUmYiygZSsNtlIM6K6CrVWZIH2K4yujVnib_J_69Cn6iqh1Z53DvGQnSMlVvhkXqiCDFtzpcJ73_On5t6ZgS0APc8QtMDBSSdnai3eTCCmZd_sgQtE8wB--w3zWhQ-c9RCAf_rL1jCLg3BMErsHTv6lOiW8Lw/s1305/gcp-firewall-rule-allow-ingress-iap.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="132" data-original-width="1305" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyX-ykxHJ53deOLgmBauqqhsEe2qHJbZgUmYiygZSsNtlIM6K6CrVWZIH2K4yujVnib_J_69Cn6iqh1Z53DvGQnSMlVvhkXqiCDFtzpcJ73_On5t6ZgS0APc8QtMDBSSdnai3eTCCmZd_sgQtE8wB--w3zWhQ-c9RCAf_rL1jCLg3BMErsHTv6lOiW8Lw/s320/gcp-firewall-rule-allow-ingress-iap.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><br /><div>Grant users (or groups) permission to use IAP TCP forwarding, and scope it to specific instances to keep it as restrictive as possible. Grant the <i>roles/iap.tunnelResourceAccessor</i> role on the VBG instance by opening the IAP admin page in the console (<b>Security > Identity-Aware Proxy</b>). 
Go to the <b>SSH and TCP Resources </b>page (you may ignore the OAuth warning).</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifSKOcMuWg5Bv8LHb9JUHm3GVdHBNy9S7jJupDiv3YnnNG4FCIm_UgJczBgYUk2df5ffxfChrRordsZBmu5O6yHDJ9Qe-TZk25l3fYrOEOmm9a5Cqz4SOHwnF8LStaO_qQqBlHe_vk4h4jWkVMR9xjlvxnOCpJXA4JDt0zTIP1uK69YsMGuLlNMoLMD30/s1446/gcp-iap-grant-permissions-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="409" data-original-width="1446" height="91" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifSKOcMuWg5Bv8LHb9JUHm3GVdHBNy9S7jJupDiv3YnnNG4FCIm_UgJczBgYUk2df5ffxfChrRordsZBmu5O6yHDJ9Qe-TZk25l3fYrOEOmm9a5Cqz4SOHwnF8LStaO_qQqBlHe_vk4h4jWkVMR9xjlvxnOCpJXA4JDt0zTIP1uK69YsMGuLlNMoLMD30/s320/gcp-iap-grant-permissions-1.png" width="320" /></a></div><br /></div><div><b><br /></b>Select your VBG instance and press <b>Add principal. </b>Give the IAM principal the <b>IAP-Secured Tunnel User </b>role. You may want to restrict access to VBG to specific periods of time or days of the week. 
In this case, add an IAM time-based condition as seen in the example below.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY_Ugr1hAgi4l6DjCJsle7rhWXXe-sCMCLrSbUI0qF8oYlN8PhQrFP2ZtIIbsux3PB1eEWi1sscSJcj7j1iomGkjfWcu-9a2ZbmuXv_Oi-lJXId-QUPMUsbswCtrV8gOGPN5DXewyNixCfEJlKfLBe1v_OAkfyJI2_MtVnIohBNm1Q4kAOxsghSuZIyiU/s894/vbg-iam-time-based-condition.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="531" data-original-width="894" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY_Ugr1hAgi4l6DjCJsle7rhWXXe-sCMCLrSbUI0qF8oYlN8PhQrFP2ZtIIbsux3PB1eEWi1sscSJcj7j1iomGkjfWcu-9a2ZbmuXv_Oi-lJXId-QUPMUsbswCtrV8gOGPN5DXewyNixCfEJlKfLBe1v_OAkfyJI2_MtVnIohBNm1Q4kAOxsghSuZIyiU/s320/vbg-iam-time-based-condition.png" width="320" /></a></div><div><br /></div>Save the configuration and you are ready to connect to your isolated VBG instance. On the machine from which you want to initiate the connection, you need the gcloud CLI (Cloud SDK) installed. 
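For reference, the console steps above (API enablement, firewall rule, IAM grant with a time condition) can also be scripted with gcloud. This is a hedged sketch: the project ID, user account, network, and business-hours window are placeholder assumptions, and the IAM binding here is at project scope, whereas the console flow above scopes the role to the single VBG instance, which is more restrictive.

```shell
# Enable the Cloud Identity-Aware Proxy API in the current project.
gcloud services enable iap.googleapis.com

# Allow IAP's TCP-forwarding range to reach VBG on HTTPS.
# Network name and target tag are assumptions for this sketch.
gcloud compute firewall-rules create allow-ingress-from-iap \
  --network=default \
  --direction=INGRESS --action=ALLOW --rules=tcp:443 \
  --source-ranges=35.235.240.0/20 \
  --target-tags=vbg-europe

# Grant the tunnel role with an IAM condition limiting it to business hours.
# Project, member, timezone and hours are placeholder values.
gcloud projects add-iam-policy-binding my-project \
  --member="user:backup-admin@example.com" \
  --role="roles/iap.tunnelResourceAccessor" \
  --condition='expression=request.time.getHours("Europe/Bucharest") >= 9 && request.time.getHours("Europe/Bucharest") < 18,title=business-hours'
```

These commands require an authenticated gcloud session with sufficient rights on the project.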
Run the following command to open a TCP forwarding tunnel to the VBG instance on port 443.<div><br /></div><div><span style="font-family: courier;">gcloud compute start-iap-tunnel your-vbg-instance-name 443 --local-host-port=localhost:0 --zone=your-instance-zone</span></div><div><span style="font-family: courier;"><br /></span></div><div>When the tunnel is established, you will see a message in the console with the local TCP port used for forwarding, similar to the image below:</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6yIlCqE9zpDwp1bHOwrCYzQfPjRiRi3WomR0_I4jVrj6z1dbih0GWOlbOxa2tz_XM3RAF01Qxt8rLCwWGMu9Sk-cjJjhRaL0a6uIgNy-rNYZIQGJtMIZR6iJhvZJYkIJS88mhprO-rH1ae6w5GHtpQ4bmeaC6-dSxTCmZnoL1qYiHe95RhgUiNsWxajs/s429/iap-tunnel-localhost.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="59" data-original-width="429" height="44" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6yIlCqE9zpDwp1bHOwrCYzQfPjRiRi3WomR0_I4jVrj6z1dbih0GWOlbOxa2tz_XM3RAF01Qxt8rLCwWGMu9Sk-cjJjhRaL0a6uIgNy-rNYZIQGJtMIZR6iJhvZJYkIJS88mhprO-rH1ae6w5GHtpQ4bmeaC6-dSxTCmZnoL1qYiHe95RhgUiNsWxajs/s320/iap-tunnel-localhost.png" width="320" /></a></div><br /></div><div><div>To be able to execute <span style="font-family: courier;">gcloud compute start-iap-tunnel </span>you need the <span style="font-family: courier;">compute.instances.get </span>and <span style="font-family: courier;">compute.instances.list</span> permissions on the project where the VBG instance runs. 
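A minimal custom role carrying just these two permissions could be defined and bound as sketched below; the role ID, project ID, and user account are hypothetical placeholders.

```shell
# Create a custom role containing only the permissions that
# "gcloud compute start-iap-tunnel" needs on the project.
gcloud iam roles create iapTunnelPrereqs --project=my-project \
  --title="IAP Tunnel Prerequisites" \
  --permissions=compute.instances.get,compute.instances.list

# Bind the custom role to the user at project level.
gcloud projects add-iam-policy-binding my-project \
  --member="user:backup-admin@example.com" \
  --role="projects/my-project/roles/iapTunnelPrereqs"
```

Keeping these permissions in a dedicated role makes it easy to grant and revoke tunnel prerequisites without touching broader Compute roles.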
You may grant these permissions to users or groups using a custom role.</div></div><div><br /></div><div>If the user is not authorized in IAP, or is blocked by an applied IAM condition, you will get the following message when trying to start the tunnel:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtcGzmuoDCVRZBB5OKpCt2zwNETqEPI2dSYMKI8RkTWO_4gNCW1t3ki5qFGLfeyCpj3BRQSRLdtHxge_zxY9AjQiWQukY76gMlZ80U2nOLAHRBsdJtWDewPFs3Zr0gEWNwHBhgp8BEqouhsfHER55YKHmgUbm44lGh0xTNsmiiRad5cIc6WARtAdoWsm4/s961/iap-tunnel-localhost-not-authorized.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="69" data-original-width="961" height="23" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtcGzmuoDCVRZBB5OKpCt2zwNETqEPI2dSYMKI8RkTWO_4gNCW1t3ki5qFGLfeyCpj3BRQSRLdtHxge_zxY9AjQiWQukY76gMlZ80U2nOLAHRBsdJtWDewPFs3Zr0gEWNwHBhgp8BEqouhsfHER55YKHmgUbm44lGh0xTNsmiiRad5cIc6WARtAdoWsm4/s320/iap-tunnel-localhost-not-authorized.png" width="320" /></a></div><div><br /></div><div><br /></div><div>Finally, it's time to open your browser, point it to localhost and the TCP port returned by the gcloud command, and connect to your VBG instance in the cloud: </div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiVlPs6DMbngYainKNMbkYnF1kxJUun_wKeIkRhsT7TSH05YWm7-_gDxD4tsYjRY9EMM_d8RQmOo1fRrpscO5yAkFo95Kr12ChyHZMn-rhoM8YI5ldXYyS2Jdp5WPPurF8XKD1JvYvOyrqIRxp_0Yqb3keORpKvAQ4EU2ZAR_s5Uk8FppQ6mb9xq9ABH0/s1292/vbg-console-connect-localhost.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="702" data-original-width="1292" height="174" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiVlPs6DMbngYainKNMbkYnF1kxJUun_wKeIkRhsT7TSH05YWm7-_gDxD4tsYjRY9EMM_d8RQmOo1fRrpscO5yAkFo95Kr12ChyHZMn-rhoM8YI5ldXYyS2Jdp5WPPurF8XKD1JvYvOyrqIRxp_0Yqb3keORpKvAQ4EU2ZAR_s5Uk8FppQ6mb9xq9ABH0/s320/vbg-console-connect-localhost.png" width="320" /></a></div><br /><div>The proposed solution is suitable for management and operations of VBG. However, please keep in mind that IAP TCP forwarding is not intended for bulk data transfer. Also, IAP automatically disconnects sessions after one hour of inactivity. </div><div><br /></div><div>In this post we've seen how to use Cloud IAP and Cloud IAM to enable secure access to the Veeam Backup for Google Cloud console using zero trust architecture principles.</div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-53212736347444284712023-05-03T08:00:00.002+03:002023-05-03T10:17:41.182+03:00Veeam Cloud Integrated Agent <p>Veeam Backup and Replication v12 brings a cloud integrated agent as part of its optimizations for hybrid cloud architectures. The agent enables application aware, immutable backups for cloud workloads hosted in AWS and Microsoft Azure. It is deployed and managed through native cloud APIs without a direct network connection to the protected workloads, and it stores the backups directly on object storage. </p><p>Having the agent deployed inside the protected cloud workloads, Veeam enables the same application aware backup technology that it uses for on-premises workloads. This in turn unlocks granular recovery using <a href="https://helpcenter.veeam.com/docs/backup/explorers/explorers_introduction.html?ver=120" target="_blank">Veeam Explorers</a>.</p><p>Let's see the agent at work. We have an Ubuntu VM in Azure. The VM has only private connectivity (no public IP). There is also a PostgreSQL instance running on the VM that we want to protect using application aware processing. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB--6X0CWcacsLMKTimBk3Guh8NOBhY9MB79U1ccv016-5tBNtsp3awH260LntawQ1B0Niz7oE6Gpz-otbIH1IuRhaD3dfAw2Qh3SXsgf3W3GVVqwAj1DKkv7GGFPeYuqVBysl1QxOsXEW6Nwv3wJKwGqDm4PfnqTu9avZJVvHGxo6ITHKbwR3n9WL/s1115/cloud-integrated-agent-diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="346" data-original-width="1115" height="99" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB--6X0CWcacsLMKTimBk3Guh8NOBhY9MB79U1ccv016-5tBNtsp3awH260LntawQ1B0Niz7oE6Gpz-otbIH1IuRhaD3dfAw2Qh3SXsgf3W3GVVqwAj1DKkv7GGFPeYuqVBysl1QxOsXEW6Nwv3wJKwGqDm4PfnqTu9avZJVvHGxo6ITHKbwR3n9WL/s320/cloud-integrated-agent-diagram.png" width="320" /></a></div><div><br /></div>Veeam Cloud Message Service installed on the backup server communicates with Veeam Cloud Message Service installed on the protected cloud machines via a message queue. The message service on the cloud machines will in turn communicate with other local Veeam components - Transport Service, Veeam Agent. The backups are sent directly to a compatible object storage repository. <div><p>To start configuration, we need to create a protection group. 
In the VBR console, go to <b>Inventory > Physical Infrastructure > Create Protection Group</b></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinfy8rZ6CbcuwWPh2j-TcELtVV5JD3t-_3Nd3F7VBWBoDqSy0zLJLR3MYNBEtZL6sNvtnxVJaF3DNhmR-Zfh9xjvgAwOmG5OvLys0Loslxkjydj6LkzlBL5DK4KxOqpdakFzAu3KDPf8y7pKpjqNF_iItIhSmJzZf6Akm33hnP_zTS4GwY5oJfu_RZ/s754/create-protection-group-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinfy8rZ6CbcuwWPh2j-TcELtVV5JD3t-_3Nd3F7VBWBoDqSy0zLJLR3MYNBEtZL6sNvtnxVJaF3DNhmR-Zfh9xjvgAwOmG5OvLys0Loslxkjydj6LkzlBL5DK4KxOqpdakFzAu3KDPf8y7pKpjqNF_iItIhSmJzZf6Akm33hnP_zTS4GwY5oJfu_RZ/s320/create-protection-group-1.png" width="320" /></a></div><div><br /></div>Select "Cloud machines"<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf1f0_HOIZ0xM3xXnbDLZr2yY2YVCa6PrFfreBCu7DSUp9RoGwD5nGymcLFb2X6LeaB5eMT7C7EfWCapBtj5xbCFJLppHwzaMtmwdPqUkAy54IuZzFdOFvKDGawe8NODYfxi2m0Vc4ajX72xdEyvUj0w6Pzy-giSBUSmWpgPDqLDmC_00tyQsHFXkm/s754/create-protection-group-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf1f0_HOIZ0xM3xXnbDLZr2yY2YVCa6PrFfreBCu7DSUp9RoGwD5nGymcLFb2X6LeaB5eMT7C7EfWCapBtj5xbCFJLppHwzaMtmwdPqUkAy54IuZzFdOFvKDGawe8NODYfxi2m0Vc4ajX72xdEyvUj0w6Pzy-giSBUSmWpgPDqLDmC_00tyQsHFXkm/s320/create-protection-group-2.png" width="320" /></a></div><div><br /></div>Add Azure credentials, subscription and region<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitYb1j80hMOx8slxW1raVlWHjAmkMWy9cZG0oeIog8el8yldSIXMpod-goQrJDtlBTO-WZgTTZWurYxYkWE9USq3XbxeaEaUYmsu2igeSjEKTVNXs1xqyM3pxR-nnV9AF1VVxiFtGJnf5HJMpLpYxU0dgJ4a8W06rUOLYwRKgXkmQQTgT8WrRwVAcH/s754/create-protection-group-3.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitYb1j80hMOx8slxW1raVlWHjAmkMWy9cZG0oeIog8el8yldSIXMpod-goQrJDtlBTO-WZgTTZWurYxYkWE9USq3XbxeaEaUYmsu2igeSjEKTVNXs1xqyM3pxR-nnV9AF1VVxiFtGJnf5HJMpLpYxU0dgJ4a8W06rUOLYwRKgXkmQQTgT8WrRwVAcH/s320/create-protection-group-3.png" width="320" /></a></div><div><br /></div>Select the workloads to protect - statically choosing the VMs or dynamically using tags<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmUM8cHkZUQXEI7xuJxRk2yc5rksR5HqYTT68spNwqSp4ikxE9tAj0NAOsepxKT3-KpcthFUtT3uQDXvu-34C9NItCdW3Cp6lPA_-39uOpEqKjXSI9MIpMU6Wy2FJ-o-QsePQQpqRoXTvnpGZsypw4USyAv0mkdR4M11fXSGyPoTNlZ4eZ7_L9EtVz/s754/create-protection-group-4.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmUM8cHkZUQXEI7xuJxRk2yc5rksR5HqYTT68spNwqSp4ikxE9tAj0NAOsepxKT3-KpcthFUtT3uQDXvu-34C9NItCdW3Cp6lPA_-39uOpEqKjXSI9MIpMU6Wy2FJ-o-QsePQQpqRoXTvnpGZsypw4USyAv0mkdR4M11fXSGyPoTNlZ4eZ7_L9EtVz/s320/create-protection-group-4.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoaOI1CKavo2V85nLAGvU945xPcFLhA02Eu23hZhEMJiy9FXSLn23SWj0wrjYkpP-iqhD-98Ue0vATKooXqHlxW0x-gZQH9IDRpICqEFWx0mc-MobcGS61UnF2kNsmRCtxYyWgtaAdu1vH-A_2GWD4qNm2RPVqkIbO1i48xVqCJjKbCnDu952R_gJl/s754/create-protection-group-5.png" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoaOI1CKavo2V85nLAGvU945xPcFLhA02Eu23hZhEMJiy9FXSLn23SWj0wrjYkpP-iqhD-98Ue0vATKooXqHlxW0x-gZQH9IDRpICqEFWx0mc-MobcGS61UnF2kNsmRCtxYyWgtaAdu1vH-A_2GWD4qNm2RPVqkIbO1i48xVqCJjKbCnDu952R_gJl/s320/create-protection-group-5.png" width="320" /></a></div><div><br /></div>Select to exclude objects (if required)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRM7sUTRXR-zo9yl582LSafEFUD_y6zZpPpfovkUI1fdGrE8_WAy7J9WYXh0SVLqh-UFtYEU66Y_8vxMVgEiWhin7ac48WZjgwvvf6k9fyh-6vrLR8dy0tKxfh3rNbAdcGooO0aPW2Ov4IKRmGdpyuR9gaqgZFLIOK-ubx3Q-T2d5ERo8ljpS5nPAr/s754/create-protection-group-6.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRM7sUTRXR-zo9yl582LSafEFUD_y6zZpPpfovkUI1fdGrE8_WAy7J9WYXh0SVLqh-UFtYEU66Y_8vxMVgEiWhin7ac48WZjgwvvf6k9fyh-6vrLR8dy0tKxfh3rNbAdcGooO0aPW2Ov4IKRmGdpyuR9gaqgZFLIOK-ubx3Q-T2d5ERo8ljpS5nPAr/s320/create-protection-group-6.png" width="320" /></a></div><div><br /></div>Select Protection group settings - similar to the ones for a standard agent <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdhAwcmbcBV5rL2RpebnLywEObBnhfBFFLKFySDaSNYXNR02c6MQIKR-Kzz_U-VrjOMGmyHgKYMEFOkjawRcGwCxldWGAKRWZ1xTiK84NnWuiAGpPbMfY7yH_X0zDw6SRGNZJmdiovRQcsEDA6PRp1hEakxoDdly2nScLqzLMGSEvhHXzK9rFzV2XN/s754/create-protection-group-7.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdhAwcmbcBV5rL2RpebnLywEObBnhfBFFLKFySDaSNYXNR02c6MQIKR-Kzz_U-VrjOMGmyHgKYMEFOkjawRcGwCxldWGAKRWZ1xTiK84NnWuiAGpPbMfY7yH_X0zDw6SRGNZJmdiovRQcsEDA6PRp1hEakxoDdly2nScLqzLMGSEvhHXzK9rFzV2XN/s320/create-protection-group-7.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilzq4KvUdpNKBJbqh-fZKi7Afx0thXG0K9_YO6SMFAjFZodrELcUPJBYNALBYjz8eHH5cOlHFdJwaOfXn-j4U097myR-hmTIi-IQu-eRBcDubGA1wbFU-YJznnIgpzze2S37LFi0RPVlDzCV3RZrIyusguFxaBfweqV0fFX1iQ3_V5TfHrBws6HLch/s754/create-protection-group-8.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilzq4KvUdpNKBJbqh-fZKi7Afx0thXG0K9_YO6SMFAjFZodrELcUPJBYNALBYjz8eHH5cOlHFdJwaOfXn-j4U097myR-hmTIi-IQu-eRBcDubGA1wbFU-YJznnIgpzze2S37LFi0RPVlDzCV3RZrIyusguFxaBfweqV0fFX1iQ3_V5TfHrBws6HLch/s320/create-protection-group-8.png" width="320" /></a></div><div><br /></div>Finalize the protection group. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxn1HCsXqlrWcbuYwtyYYInoc_kiMaYeod5wXNAExjrOOUL6oEbjsYkzX_L_NUts15pNAlgp_sFvWzPMq2ieHuov9orAE6AMRgpJ-P7HfgW23_y7MZ6cKuMwSleddkviBhwi1OKcZgMQZrtPU80IUWZ0uLjbJDPEVLlz_n3KoiIVNWYSnLzSOhHcSh/s754/create-protection-group-9.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="537" data-original-width="754" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxn1HCsXqlrWcbuYwtyYYInoc_kiMaYeod5wXNAExjrOOUL6oEbjsYkzX_L_NUts15pNAlgp_sFvWzPMq2ieHuov9orAE6AMRgpJ-P7HfgW23_y7MZ6cKuMwSleddkviBhwi1OKcZgMQZrtPU80IUWZ0uLjbJDPEVLlz_n3KoiIVNWYSnLzSOhHcSh/s320/create-protection-group-9.png" width="320" /></a></div><br /><p>Once the protection group is created, discovery of the protected workloads starts. During the process, Veeam components are pushed to the protected machine. Keep in mind there is no direct connectivity between the Veeam Backup server (VBR) and the cloud machine. Moreover, the cloud machine has only a private IP address. 
All actions are done using Azure APIs and Azure native services.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih1fQEwNvzuDBEbXRA2XUwK4hWVgmZ43UiXz7RmO2nI6EsGqrQvTvrFOhVOXwBAQXDCTLe-f7fPyqGocAWICaQOTs41kMXZdTHsdtXX_iEDtlPOD5OoYtIES22Gc_Q0mEssNHJsdbJ8xGYI1z6piuBSdem_1D71OEQeaAhdHEzvi6PcS7Q7JkyDCPv/s790/machine-rescan-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="590" data-original-width="790" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih1fQEwNvzuDBEbXRA2XUwK4hWVgmZ43UiXz7RmO2nI6EsGqrQvTvrFOhVOXwBAQXDCTLe-f7fPyqGocAWICaQOTs41kMXZdTHsdtXX_iEDtlPOD5OoYtIES22Gc_Q0mEssNHJsdbJ8xGYI1z6piuBSdem_1D71OEQeaAhdHEzvi6PcS7Q7JkyDCPv/s320/machine-rescan-2.png" width="320" /></a></div><br /><p>First, Veeam installs the Veeam Cloud Message service on the protected instance. Then it installs the Veeam Transport Service and Veeam Agent for Linux. The VBR server uses the Cloud Message service and Azure Queue Storage to communicate with the service on the protected instance. </p><p>The cloud machine is configured. It's time to create a backup job. Go to <b>Home > Jobs > Backup > Linux computer</b></p><p>We need to use the <b>Managed by backup server</b> mode. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh24Gdwm5AwmsGmfEpAeA1SYp3mgfaXm3NTnPV8vjJ-E4vgBCIRs4-VUS3ZB2xWDpLuFwT8WmhH6qP6ThHdAnpN7bKD6qCMyxclge_3Ec82vlcaIH_H7oPV1hnRrkKkO5UBpElda9DfmWIz4odvJD-VhnqFz4-IFux72LqjuDb3zQ8II8KqwQCXPYm4/s794/cloud-agent-backup-job-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh24Gdwm5AwmsGmfEpAeA1SYp3mgfaXm3NTnPV8vjJ-E4vgBCIRs4-VUS3ZB2xWDpLuFwT8WmhH6qP6ThHdAnpN7bKD6qCMyxclge_3Ec82vlcaIH_H7oPV1hnRrkKkO5UBpElda9DfmWIz4odvJD-VhnqFz4-IFux72LqjuDb3zQ8II8KqwQCXPYm4/s320/cloud-agent-backup-job-1.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjy3wRkPMnL2eSlwkT6DunrK3quVRClcwfUkhl1zaAQeTtvT-CcvOS6mviUT7gKQ8DojuyuGCTDzVgQ_QlrYDF5ZbzIGLSBvVvimVTjQWeDGUEOxKrGw9V1Pa75NXVCryqIaVKVM-1aW-wlychhs-QtTc4llkFxaB763mwvX2XyeUkYoylGr3YuEjmy/s794/cloud-agent-backup-job-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjy3wRkPMnL2eSlwkT6DunrK3quVRClcwfUkhl1zaAQeTtvT-CcvOS6mviUT7gKQ8DojuyuGCTDzVgQ_QlrYDF5ZbzIGLSBvVvimVTjQWeDGUEOxKrGw9V1Pa75NXVCryqIaVKVM-1aW-wlychhs-QtTc4llkFxaB763mwvX2XyeUkYoylGr3YuEjmy/s320/cloud-agent-backup-job-2.png" width="320" /></a></div><div><br /></div>Select the protection group<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj94lGBG5dXcxwMKT_JE9v-M9VTZmL90bo2w_MObIScwRBu7DHLxfUSol5V4vEVaE3R8Iz8ynYdeYrPyo3JQ9HIlnGDWXhsi9sPb1KaMolXvlSPUVKp-qpoWR3C2ZAXn2_ydsfgguo5fHW3PjXWP5Fng4bCZWep5FdrfgJoXlfJ6H-lBxxRyhyUBdci/s794/cloud-agent-backup-job-3.png" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj94lGBG5dXcxwMKT_JE9v-M9VTZmL90bo2w_MObIScwRBu7DHLxfUSol5V4vEVaE3R8Iz8ynYdeYrPyo3JQ9HIlnGDWXhsi9sPb1KaMolXvlSPUVKp-qpoWR3C2ZAXn2_ydsfgguo5fHW3PjXWP5Fng4bCZWep5FdrfgJoXlfJ6H-lBxxRyhyUBdci/s320/cloud-agent-backup-job-3.png" width="320" /></a></div><div><br /></div>Select the backup mode<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH18DkfpHwpNjKNb96e1SjAe8xFiUpeFZ3hCztLjQ03746r8-f5cpixpqIW-XTqRRhoDCN0ByMtmWBem02Aaq0atQ7Ox9qwkmXkPmMlJtKqeR8XNQEGSs5O4mENGssp44rpH8kBvYotsG0wc9_AnFqFK5lof15hdU_s7afh_JWIIn4FovWCzKWdd9Y/s794/cloud-agent-backup-job-4.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH18DkfpHwpNjKNb96e1SjAe8xFiUpeFZ3hCztLjQ03746r8-f5cpixpqIW-XTqRRhoDCN0ByMtmWBem02Aaq0atQ7Ox9qwkmXkPmMlJtKqeR8XNQEGSs5O4mENGssp44rpH8kBvYotsG0wc9_AnFqFK5lof15hdU_s7afh_JWIIn4FovWCzKWdd9Y/s320/cloud-agent-backup-job-4.png" width="320" /></a></div><div><br /></div>Destination repository needs to be object storage<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-39-RasGGy0WVTXgmwJNBQP0pzk4LmRqws8BsigY0Z2Hm7XgKaRFffzw8IW7a_bPHuRukqwixN1RWx-ibTzaCZK9ZUOH920NmqXMhhlPXb9HEu5xOj6Bluzt-KPeoIttlG-pUCkgsUN4yaobV3KSpg53mAFhi9kVEGkH7BVH_gTn5ueXbj3xD4gbM/s794/cloud-agent-backup-job-5.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-39-RasGGy0WVTXgmwJNBQP0pzk4LmRqws8BsigY0Z2Hm7XgKaRFffzw8IW7a_bPHuRukqwixN1RWx-ibTzaCZK9ZUOH920NmqXMhhlPXb9HEu5xOj6Bluzt-KPeoIttlG-pUCkgsUN4yaobV3KSpg53mAFhi9kVEGkH7BVH_gTn5ueXbj3xD4gbM/s320/cloud-agent-backup-job-5.png" width="320" /></a></div><div><br /></div>We'll enable application aware processing to protect the PostgreSQL instance running on the cloud machine. All the options of a standard Veeam Agent for Linux are available. We could run application aware backups for Oracle or MySQL, and configure pre- and post-job scripts as well as pre- and post-snapshot scripts. We could also enable guest file system indexing.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9bOYvRHGowZZSe-ZYOjgFWQmNPwMkF3NbIYYph0BJoEjR-xpJiFRdNTJayUHE19d8n6yOpr8SvOxkopfDqRljcY-AbuZSaYjO6-ONgwKTh1KT4GiTuGnEczcHnPPxiKQ5KUslAvjFdM_Pi0y1uuVvhcyAN1uOOxUcrByBOkGoRXz8lnjV8QmzEcrw/s794/cloud-agent-backup-job-6-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9bOYvRHGowZZSe-ZYOjgFWQmNPwMkF3NbIYYph0BJoEjR-xpJiFRdNTJayUHE19d8n6yOpr8SvOxkopfDqRljcY-AbuZSaYjO6-ONgwKTh1KT4GiTuGnEczcHnPPxiKQ5KUslAvjFdM_Pi0y1uuVvhcyAN1uOOxUcrByBOkGoRXz8lnjV8QmzEcrw/s320/cloud-agent-backup-job-6-1.png" width="320" /></a></div><div><br /></div>The PostgreSQL instance has been configured to require user authentication. 
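If no suitable database account exists yet, one could be created on the protected VM along the lines of the sketch below. The role name and password are hypothetical placeholders; check Veeam's documentation for the exact privileges required (a superuser role is the simplest option).

```shell
# Hypothetical example: create a dedicated PostgreSQL account that the
# agent can use for application aware processing. Run on the protected VM.
sudo -u postgres psql -c \
  "CREATE ROLE veeam_backup WITH LOGIN PASSWORD 'change-me' SUPERUSER;"
```

Whatever account you use, store its password securely and rotate it according to your policy.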
Add the user credentials to the agent.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2yEXdMDKFHArsDzLDox1bTsYkwE7hRWwZX5vko2L7IPdS02LfnkSdn5mysh-TIwPZVYLzc87LJEGh7uMpnZqe3_tPv28WTHgz5an_-KyiYiZGmwgGg9lS6giiQLPwtd0lWGyhYS1LeRcgh9PrXT20aqUOPdg5FmUCMKK6hTDggWq9clpRpjJzglx4/s794/cloud-agent-backup-job-6-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2yEXdMDKFHArsDzLDox1bTsYkwE7hRWwZX5vko2L7IPdS02LfnkSdn5mysh-TIwPZVYLzc87LJEGh7uMpnZqe3_tPv28WTHgz5an_-KyiYiZGmwgGg9lS6giiQLPwtd0lWGyhYS1LeRcgh9PrXT20aqUOPdg5FmUCMKK6hTDggWq9clpRpjJzglx4/s320/cloud-agent-backup-job-6-2.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJLdfr4pPiNOy6oDKrXL-GZzPNMYb8r63D_lJoGjsQzWoiaEJTL89DbAqLJ692q3w5F7uLX8j174YQdFWEO6UY64MI2QK8LCBPZG_DjY7-qvA1XEg69OqC_VlQiTCeOVacNeGDff8jq_lRLBkKVj_2B645oRYXS1rwPav9qCh0Wbyvlh4D670Lm4uM/s794/cloud-agent-backup-job-6.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJLdfr4pPiNOy6oDKrXL-GZzPNMYb8r63D_lJoGjsQzWoiaEJTL89DbAqLJ692q3w5F7uLX8j174YQdFWEO6UY64MI2QK8LCBPZG_DjY7-qvA1XEg69OqC_VlQiTCeOVacNeGDff8jq_lRLBkKVj_2B645oRYXS1rwPav9qCh0Wbyvlh4D670Lm4uM/s320/cloud-agent-backup-job-6.png" width="320" /></a></div><div><br /></div>Select the backup schedule and run the job<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsG-00oTxCJK-z_Blp2XZnyFwZl5MZQPkueQxd67s4gpQDJs1ArBQG0Oiekzr6u4PnKC6S_KLexDYmi49JfcnbZwqLwUoLsYMcDwNAhFfTq-BLKwIvN5VW2X5-jb-medle_1dC85MdyV-Yd4AhjEFMUvToW8kPqstBO3e9oa6agI1Lv8SWZGN1XgbQ/s794/cloud-agent-backup-job-7.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsG-00oTxCJK-z_Blp2XZnyFwZl5MZQPkueQxd67s4gpQDJs1ArBQG0Oiekzr6u4PnKC6S_KLexDYmi49JfcnbZwqLwUoLsYMcDwNAhFfTq-BLKwIvN5VW2X5-jb-medle_1dC85MdyV-Yd4AhjEFMUvToW8kPqstBO3e9oa6agI1Lv8SWZGN1XgbQ/s320/cloud-agent-backup-job-7.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7ZskAR97IgVFpqH0Ps55i4EMNUOhpDoE3LWR5eFJuttOvS18go41dOtCYd-JeyFD8L1ccloYpCnrrtJJulNgvMEn13ny6gQd05zqIiddeZlebTMjQLPyRmPHWYWaxfCDOl3xhhCmM9FgSPsJfx0HIkol7QPjtYb5_ZExETMyQvfcQjkXlwM-t5JhN/s794/cloud-agent-backup-job-8.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="794" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7ZskAR97IgVFpqH0Ps55i4EMNUOhpDoE3LWR5eFJuttOvS18go41dOtCYd-JeyFD8L1ccloYpCnrrtJJulNgvMEn13ny6gQd05zqIiddeZlebTMjQLPyRmPHWYWaxfCDOl3xhhCmM9FgSPsJfx0HIkol7QPjtYb5_ZExETMyQvfcQjkXlwM-t5JhN/s320/cloud-agent-backup-job-8.png" width="320" /></a></div><p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuKA-yfzwx6gGl7A3Sz8Rq8m219LEk9SEGsjZjO-NhkctaYaBuZ3TCuq_7CupPSAShlkoluNu25VONEvVWSdp8yJJ5MzvimemnsoeS5FLobco9QWKKqli2LMPTdvCzjFNdo8Eh_v9fqbBRWW29PY06spMiw6E-E2LCi0WPqovKSVp6cwKgzs2TeC65/s982/backup-job-statistics-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="723" 
data-original-width="982" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuKA-yfzwx6gGl7A3Sz8Rq8m219LEk9SEGsjZjO-NhkctaYaBuZ3TCuq_7CupPSAShlkoluNu25VONEvVWSdp8yJJ5MzvimemnsoeS5FLobco9QWKKqli2LMPTdvCzjFNdo8Eh_v9fqbBRWW29PY06spMiw6E-E2LCi0WPqovKSVp6cwKgzs2TeC65/s320/backup-job-statistics-2.png" width="320" /></a></div><br /><p>After the backup completes, we look at the restore options. We can now restore our cloud machine on premises using Instant Recovery. We can also restore it to another cloud. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk_3Ak8Y5qdVQ0u3SmFwzEOYOZFyrxA1KvsYoVXSfcjKQBk6cpYfHvNQ-Mp7oZQhS1Ug3Tb_bz16h774pQNFaMIRSoZcXRJsIMpcDmHNKOFTWCgKUZwEYsQdOT1Pqwx27oeaJQks00e-APAycF3JEGrObz8_w-uzykJ3zykuJUVZbSQ6YR4FOFkSCP/s585/restore-cloud-vm-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="449" data-original-width="585" height="246" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk_3Ak8Y5qdVQ0u3SmFwzEOYOZFyrxA1KvsYoVXSfcjKQBk6cpYfHvNQ-Mp7oZQhS1Ug3Tb_bz16h774pQNFaMIRSoZcXRJsIMpcDmHNKOFTWCgKUZwEYsQdOT1Pqwx27oeaJQks00e-APAycF3JEGrObz8_w-uzykJ3zykuJUVZbSQ6YR4FOFkSCP/s320/restore-cloud-vm-2.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><p style="text-align: left;">We have access to Veeam Explorer for PostgreSQL: we can restore the instance to another server, publish the instance to another server, or restore the latest state to the protected VM. 
</p></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Kiq6F645e28-aPMnyJZ3I08qKnZ64_T5eBFx1WGbHq963_IGkwm0jctTHhpMsfY69d9jYxgqfR3FMMOzxS1nq1DBs-KmWGuNmQRx9JEZxu1wSHvCKDcbrWcJwIXXJC024TQP-79oiTPm48UX0KaHg9aq-1tm5zggxRHaaheJ6mXr0D7RbUOB6V7F/s342/restore-pgsql-3.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="342" data-original-width="253" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Kiq6F645e28-aPMnyJZ3I08qKnZ64_T5eBFx1WGbHq963_IGkwm0jctTHhpMsfY69d9jYxgqfR3FMMOzxS1nq1DBs-KmWGuNmQRx9JEZxu1wSHvCKDcbrWcJwIXXJC024TQP-79oiTPm48UX0KaHg9aq-1tm5zggxRHaaheJ6mXr0D7RbUOB6V7F/s320/restore-pgsql-3.png" width="237" /></a></div><br /><div>To implement the 3-2-1 rule we can create a backup copy job and send a copy of the backups to another repository on premises or at another cloud service provider. </div><div><br /></div><div>In this post we have looked at the new Veeam cloud-integrated agents, their advantages, and how easy they are to configure. </div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-6209290490351740662023-04-23T20:47:00.015+03:002023-04-24T07:47:41.247+03:00A Quick Look At Terraform Provider for Ansible<p><a href="https://registry.terraform.io/providers/ansible/ansible/latest/docs" target="_blank">Terraform Provider for Ansible v1.0.0</a> has been released recently and while reading a couple of articles about it I actually wanted to see how it works end to end. </p><p>We're going to look in this article at a use case where we provision cloud infrastructure with Terraform and then use Ansible to configure that infrastructure.</p><p><b>To be more specific, in our scenario we are looking at achieving the following:</b></p><p>1. 
use Terraform to deploy an infrastructure in Google Cloud: VPC, VM instance with an external IP address and firewall rule to allow access to the instance </p><p>2. automatically and transparently update Ansible inventory file </p><p>3. automatically configure the newly provisioned VM instance with Ansible </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXkSdD4MKtCZax1Mp5lkW6D2SxfqfgAnoxy5LPsATh0Umn3TDmQ-rBln1Hhh-gA6gaybsqzmYu4EGTGGoEIOSMQ4h8By22NY7WFzTRdhbuWChAhKQO_hGvKVgp5_iLbpQ0MpXRqOPhvjaSsWn-coJV8K9qGplfp9JZZ3XxDsmo_c__hFXSJ0uNHZrE/s1491/terraform-ansible-google-cloud.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="810" data-original-width="1491" height="217" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXkSdD4MKtCZax1Mp5lkW6D2SxfqfgAnoxy5LPsATh0Umn3TDmQ-rBln1Hhh-gA6gaybsqzmYu4EGTGGoEIOSMQ4h8By22NY7WFzTRdhbuWChAhKQO_hGvKVgp5_iLbpQ0MpXRqOPhvjaSsWn-coJV8K9qGplfp9JZZ3XxDsmo_c__hFXSJ0uNHZrE/w400-h217/terraform-ansible-google-cloud.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p>We use Terraform provider for Ansible and <a href="https://github.com/ansible-collections/cloud.terraform" target="_blank">Ansible Terraform collection</a>. From the collection we will be using the inventory plugin. Everything is run from a management machine installed with Terraform, Ansible and the Ansible collection (for installation please see the GitHub project linked above).</p><p>We will orchestrate everything from Terraform. We'll use Ansible provider to place the newly created VM instance to a specific Ansible group called "nginx_hosts" and execute Ansible commands to update the inventory and run the playbook that installs nginx. </p><p>For simplicity we use a flat structure with a single Terraform configuration file, an Ansible inventory file and an Ansible playbook. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUkXMAwLF9d3f2fjfHaUHqqQodGzAk6XJ3ItcIl34QrMaD67DPDTiOcqUYudZAwS_SyGMDqGRJIipGyeU0PO94M5q5-AryIjhv82ydyZHgvAoFY2CD0guLTvwWYw2GnW4mrveZ7LcR4bF7YiT_b6QJm-1Z9gaKaiuXJRPKndJSK6DchU2JZBK1Sm_E/s230/project-structure.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="102" data-original-width="230" height="89" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUkXMAwLF9d3f2fjfHaUHqqQodGzAk6XJ3ItcIl34QrMaD67DPDTiOcqUYudZAwS_SyGMDqGRJIipGyeU0PO94M5q5-AryIjhv82ydyZHgvAoFY2CD0guLTvwWYw2GnW4mrveZ7LcR4bF7YiT_b6QJm-1Z9gaKaiuXJRPKndJSK6DchU2JZBK1Sm_E/w200-h89/project-structure.png" width="200" /></a></div><br /><p>We start by looking at the Ansible files.</p><p><span style="font-family: courier;">inventory.yml</span> contains only one line that references the collection inventory plugin:</p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">plugin: cloud.terraform.terraform_provider
</pre></div>
<p></p><p>This way we make sure the inventory file is actually created dynamically based on the Terraform state file. </p><p><span style="font-family: courier;">nginx_install.yml</span> is the playbook that installs nginx on the VM instance. It's a very simple playbook that ensures the latest version of nginx is installed and that the service is started. We will be using Ubuntu for our Linux distribution. </p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #0e84b5; font-weight: bold;">---</span>
- hosts: nginx_hosts
  tasks:
    - name: ensure nginx is at the latest version
      apt: name=nginx state=latest update_cache=true
      become: true
    - name: start nginx
      service:
        name: nginx
        state: started
</pre></div>
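As a side note, the same play can be written with fully qualified module names and multi-line arguments, which tends to scale better as playbooks grow — an equivalent sketch with `become` hoisted to the play level:

```yaml
- hosts: nginx_hosts
  become: true
  tasks:
    - name: ensure nginx is at the latest version
      ansible.builtin.apt:
        name: nginx
        state: latest
        update_cache: true
    - name: start nginx
      ansible.builtin.service:
        name: nginx
        state: started
```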
<p></p><p>Based on the code written so far, if we add any host to the group named "nginx_hosts", running the playbook will ensure the latest version of nginx is installed. We have no knowledge of IP addresses or the hostnames of those hosts. We actually have no idea if there are any hosts in the group. </p><p><br /></p><p>The Ansible hosts that we want to configure are created using Terraform. For simplicity there is only one flat file - main.tf. We start by defining the Ansible provider.</p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">terraform {
  required_providers {
    ansible = {
      source  = "ansible/ansible"
      version = "1.0.0"
    }
  }
}
</pre></div>
<p></p><p>Next we define the variables. We are using the Google Cloud provider and we need some variables to configure it and deploy the resources. We are using a user_id to generate unique resource names for each deployment. We add GCP provider variables (region, AZ, project) and variables for the network.</p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">variable "user_id" {
  type        = string
  description = "unique id used to create resources"
  default     = "tfansible001"
}
variable "gcp_region" {
  type        = string
  description = "Google Cloud region where to deploy the resources"
  default     = "europe-west4"
}
variable "gcp_zone" {
  type        = string
  description = "Google Cloud availability zone where to deploy resources"
  default     = "europe-west4-a"
}
variable "gcp_project" {
  type        = string
  description = "Google Cloud project name where to deploy resources"
  default     = "your-project"
}
variable "networks" {
  description = "list of VPC names and subnets"
  type        = map(any)
  default = {
    web = "192.168.0.0/24"
  }
}
variable "fwl_allowed_tcp_ports" {
  type        = map(any)
  description = "list of firewall ports to open for each VPC"
  default = {
    web = ["22", "80", "443"]
  }
}
</pre></div>
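The defaults above are just placeholders. In practice you would override them at apply time, for example with a terraform.tfvars file along these lines (all values below are made up):

```hcl
# terraform.tfvars -- example overrides (all values are placeholders)
user_id     = "demo001"
gcp_project = "my-gcp-project-id"
gcp_region  = "europe-west4"
gcp_zone    = "europe-west4-a"
networks = {
  web = "10.10.0.0/24"
}
```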
<p></p><p>We also need variables for the Ansible provider resources: the Ansible user that can connect to and configure the instance, the path to the SSH key file and the path to the Python executable. In case you just want to test this, you can use your Google Cloud user. </p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">variable "ansible_user" {
  type        = string
  description = "Ansible user used to connect to the instance"
  default     = "ansible_user"
}
variable "ansible_ssh_key" {
  type        = string
  description = "ssh key file to use for ansible_user"
  default     = "path_to_ssh_key_for_ansible_user"
}
variable "ansible_python" {
  type        = string
  description = "path to python executable"
  default     = "/usr/bin/python3"
}
</pre></div>
<p></p><p>Then we configure the Google Cloud provider. Note that in Terraform it is not mandatory to declare a provider in the required_providers block. Also note that the Ansible provider requires no configuration. </p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">provider "google" {
  region  = var.gcp_region
  zone    = var.gcp_zone
  project = var.gcp_project
}
</pre></div>
<p></p><p><b><br /></b></p><p>Time to create the resources. We start with the VPC, subnet and firewall rules. The code iterates through the map object defined in the variables section:</p><p>
<!--HTML generated using hilite.me--></p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">resource "google_compute_network" "main" {
  for_each                = var.networks
  name                    = "vpc-${each.key}-${var.user_id}"
  auto_create_subnetworks = "false"
}
resource "google_compute_subnetwork" "main" {
  for_each                 = var.networks
  name                     = "subnet-${each.key}-${var.user_id}"
  ip_cidr_range            = each.value
  network                  = google_compute_network.main[each.key].id
  private_ip_google_access = "true"
}
resource "google_compute_firewall" "allow" {
  for_each = var.fwl_allowed_tcp_ports
  name     = "allow-${each.key}"
  network  = google_compute_network.main[each.key].name
  allow {
    protocol = "tcp"
    ports    = each.value
  }
  source_ranges = ["0.0.0.0/0"]
  depends_on = [
    google_compute_network.main
  ]
}
</pre></div>
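One caveat: source_ranges = ["0.0.0.0/0"] opens the listed ports to the entire Internet, which is acceptable for a short-lived lab but not much else. A tighter sketch (my own addition, using a placeholder management IP) would be a dedicated rule such as:

```hcl
# restrict management access to a single source IP (placeholder address)
resource "google_compute_firewall" "allow_mgmt" {
  name    = "allow-mgmt-${var.user_id}"
  network = google_compute_network.main["web"].name
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  source_ranges = ["203.0.113.10/32"]
}
```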
<br /><p></p><p>Then we deploy the VM instance and we inject the ssh key using VM metadata. Again, ansible_user could be your Google user if you are using this for a quick test.</p><p>
<!--HTML generated using hilite.me--></p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">resource "google_compute_instance" "web" {
  name         = "web-vm-${var.user_id}"
  machine_type = "e2-medium"
  boot_disk {
    initialize_params {
      image = "projects/ubuntu-os-cloud/global/images/ubuntu-2210-kinetic-amd64-v20230125"
    }
  }
  network_interface {
    network    = google_compute_network.main["web"].self_link
    subnetwork = google_compute_subnetwork.main["web"].self_link
    access_config {}
  }
  metadata = {
    "ssh-keys" = <<EOT
ansible_user:ssh-rsa AAAAB3NzaC1y...
EOT
  }
}
</pre></div>
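To avoid digging through the state file for the instance's public address later, it can help to expose it with an output block (the output name is my own):

```hcl
output "web_public_ip" {
  description = "external IP of the web VM instance"
  value       = google_compute_instance.web.network_interface.0.access_config.0.nat_ip
}
```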
<div><br /></div><br /><p></p><p>So far we have the infrastructure deployed. We now need to configure the VM instance. We will configure a resource of type <span style="font-family: courier;">ansible_host</span>. The resource will be used to dynamically update the Ansible inventory. </p><p>
<!--HTML generated using hilite.me--></p><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">resource "time_sleep" "wait_20_seconds" {
  depends_on      = [google_compute_instance.web]
  create_duration = "20s"
}
resource "ansible_host" "gcp_instance" {
  name   = google_compute_instance.web.network_interface.0.access_config.0.nat_ip
  groups = ["nginx_hosts"]
  variables = {
    ansible_user                 = var.ansible_user,
    ansible_ssh_private_key_file = var.ansible_ssh_key,
    ansible_python_interpreter   = var.ansible_python
  }
  depends_on = [time_sleep.wait_20_seconds]
}
</pre></div>
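For reference, the ansible_host resource above registers the host roughly as the following static inventory entry would (the IP address and key path are placeholders):

```yaml
nginx_hosts:
  hosts:
    203.0.113.25:
      ansible_user: ansible_user
      ansible_ssh_private_key_file: /home/user/.ssh/ansible_key
      ansible_python_interpreter: /usr/bin/python3
```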
<br /><p></p><div>We've added a sleep time to make sure the VM instance is powered on and its services are running. Please note that we add the public IP of the VM instance, whatever it happens to be, as the host name in Ansible. The host is added to the "nginx_hosts" group. We also let Ansible know which user, SSH key and Python interpreter to use. </div><div><br /></div><div>The last thing to do is to update the Ansible inventory and run the playbook. We will use terraform_data resources to execute the Ansible command line.</div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">resource "terraform_data" "ansible_inventory" {
  provisioner "local-exec" {
    command = "ansible-inventory -i inventory.yml --graph --vars"
  }
  depends_on = [ansible_host.gcp_instance]
}
resource "terraform_data" "ansible_playbook" {
  provisioner "local-exec" {
    command = "ansible-playbook -i inventory.yml nginx_install.yml"
  }
  depends_on = [terraform_data.ansible_inventory]
}
</pre></div>
<br /></div><div>And that's it. Once you update the code above with your information and run <span style="font-family: courier;">terraform apply</span><span style="font-family: inherit;">, it will automatically deploy a Google Cloud VM instance and configure it with Ansible. All transparent and dynamic, all driven from Terraform. </span></div><div><br /></div><div>In this article you've seen how to use Terraform to deploy a cloud VM instance and automatically and transparently configure it with Ansible.</div><div><br /></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com3tag:blogger.com,1999:blog-555576322574773333.post-26066511783735853672023-04-17T08:43:00.001+03:002023-04-17T08:43:29.008+03:00Moving Backups to Hardened Linux Repositories<p>It's not enough to have a backup of your data. You need to make sure that you will be able to recover from that backup when the time comes. And one of the best ways to make sure you can do it is to protect your backups from being modified, intentionally or unintentionally. </p><p>In Veeam Backup & Replication, a hardened repository uses a Linux server to provide immutability for your backups. The feature was first released in version 11. Let's see what makes the hardened repository special, how it protects your backups from changes and how easy it is to actually start using it. </p><p><b><br /></b></p><p><b>Immutable file attribute</b></p><p>The Linux file system allows setting special attributes on its files. One of these attributes is the immutable attribute. As long as it is set on a file, that file cannot be modified by any user, not even root. Moreover, root is the only user that can actually set and unset the immutable attribute on a specific file. 
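As a quick illustration of the mechanics, here is a small shell sketch (it assumes root on an ext4 or XFS volume; the file name is just an example, and the script skips the demo gracefully where the flag is unsupported):

```shell
# demonstrate the immutable ('i') attribute on a scratch file
f=/tmp/immutable-demo.vbk
touch "$f"
status="skipped"
if chattr +i "$f" 2>/dev/null; then
    # while the flag is set, not even root can modify or delete the file
    rm -f "$f" 2>/dev/null && status="deleted" || status="delete blocked"
    chattr -i "$f"   # only root can clear the flag again
fi
rm -f "$f"           # once the flag is cleared, deletion works as usual
echo "$status"
```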
You can do it using the <span style="font-family: courier;">lsattr</span> and <span style="font-family: courier;">chattr </span>commands in Linux, as seen in the screenshot below:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe7_AdtOxjZ2XSdgCV278XMXk6bT6dl72ef-rLPfFtJeAOY1ypXUheBaeUHfj76z17CjF_JzmRgejizUmhoOs7-OxKbcJ9ftM54tANiKzd1zll1vrNoWsq1R1ZC6GSmEgXteLO2Hk9N-KUdvsWTxgXTL0pu8-LdrAmBf2xDD9XeOFd4ChU9zdO1NRe/s579/immutable-file-attribute.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="146" data-original-width="579" height="81" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe7_AdtOxjZ2XSdgCV278XMXk6bT6dl72ef-rLPfFtJeAOY1ypXUheBaeUHfj76z17CjF_JzmRgejizUmhoOs7-OxKbcJ9ftM54tANiKzd1zll1vrNoWsq1R1ZC6GSmEgXteLO2Hk9N-KUdvsWTxgXTL0pu8-LdrAmBf2xDD9XeOFd4ChU9zdO1NRe/s320/immutable-file-attribute.png" width="320" /></a></div><p>The Veeam hardened repo uses exactly the same mechanism to make backup files immutable. </p><p><b><br /></b></p><p><b>Isolate Linux processes </b></p><p>To run a successful repository, Veeam needs several functionalities: to receive data from proxies, to open and close firewall ports, and to set and unset immutability as per the retention policy. In order to harden the repository, Veeam implements these functionalities as separate Linux processes.</p><p>The process that sets and unsets the immutable attribute on the backup files is called <span style="font-family: courier;">veeamimmureposvc </span><span style="font-family: inherit;">and it </span>needs to run with root privileges, as root is the only user that can modify the immutable attribute.</p><p><span style="font-family: courier;">veeamtransport --run-service </span><span style="font-family: inherit;">is the Data Mover service performing data receiving, processing and storing</span><span style="font-family: inherit;">. 
Because it is a service exposed on the network, </span>it runs under a standard Linux user. In case of a breach, the service will give access only to a standard user with limited privileges. The Linux user under which this service runs must not be allowed to elevate its privileges. </p><p>A third process takes care of dynamically opening and closing firewall ports: <span style="font-family: courier;">veeamtransport --run-environmentsvc</span>, and this one also runs with elevated privileges. </p><p>The following screenshot shows the three main services that are part of a hardened repository. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsGe1GSf1WOL3WLkLZQKjd-q8Hv_JHvUSqOoqpURnJmhLE6DQ5tAVK_EGROGr2v5iDjxb2IMnnetNMvK2UcP95UVst_VR6WpdtWVJTWj0eUrMv51_qSDt9QnK4cQtiH6DmqC5hnARA3Jz3RnLKkqwbNH4pv7-0OTTPyrvwI0Z6UdRKj0Rf0DWmAQcU/s1584/repo-processes.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="73" data-original-width="1584" height="15" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsGe1GSf1WOL3WLkLZQKjd-q8Hv_JHvUSqOoqpURnJmhLE6DQ5tAVK_EGROGr2v5iDjxb2IMnnetNMvK2UcP95UVst_VR6WpdtWVJTWj0eUrMv51_qSDt9QnK4cQtiH6DmqC5hnARA3Jz3RnLKkqwbNH4pv7-0OTTPyrvwI0Z6UdRKj0Rf0DWmAQcU/s320/repo-processes.png" width="320" /></a></div><br /><p><b>Single use credentials</b></p><p>Another layer of protection is added through the way the credentials are handled within the backup server.</p><p>To add the Linux repo to the backup server you need to specify Linux credentials. These credentials are only used during the initial configuration process and they are never stored in the backup server's credential manager. Temporary privilege elevation may be needed during the repository configuration for deployment and installation of Veeam processes. After the configuration process finishes, all elevated privileges must be revoked from the user. 
</p><p> </p><p><b>Additional repository features - fast clone</b></p><p>This one is not a security-related feature, but it comes in as a great add-on to the hardened repository.</p><p>In case you formatted your repository with the XFS file system and you have a supported Linux distribution (see <a href="https://helpcenter.veeam.com/docs/backup/vsphere/backup_repository_block_cloning.html?ver=120" target="_blank">this user guide page for more details</a>), Veeam will use fast clone to reduce the used disk space on the repository and increase the speed of synthetic backups and transformations. Fast clone works by referencing existing data blocks on the repository instead of copying the data blocks between files. </p><p><br /></p><p><b>Using the hardened repository</b></p><p>For new backup jobs, just point them to your hardened repository. In case you have existing backups, you need to migrate those to your new repo. With v12 comes a new feature that allows you to move any backup from an existing repository to another one. Simply select your backup, right-click it and you will see that now you can "move backup".</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNfbOF1gdGoB_6ycnAuGO1A-ucSVXt-UOuRYvz002MwMVtE5eAfXbiQ3iY4Zu22KhLhPKF9vT2AXthmjZIhQ1ygeGRbr3CMoM1HKmDPjShMzkC5CeNPJQXlgjDazPqgu0ZfrGV84iH3JeUdaRQlUqnm9MSab2bFF0ZnTTrx3EdkvHooop8wa3wKO-b/s643/vbr-move-backup.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="127" data-original-width="643" height="63" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNfbOF1gdGoB_6ycnAuGO1A-ucSVXt-UOuRYvz002MwMVtE5eAfXbiQ3iY4Zu22KhLhPKF9vT2AXthmjZIhQ1ygeGRbr3CMoM1HKmDPjShMzkC5CeNPJQXlgjDazPqgu0ZfrGV84iH3JeUdaRQlUqnm9MSab2bFF0ZnTTrx3EdkvHooop8wa3wKO-b/s320/vbr-move-backup.png" width="320" /></a></div><br /><p>Let's look at moving backups from a Windows NTFS repository to our hardened Linux repo. 
We start with an empty repository configured with a service account called <span style="font-family: courier;">veeambackup</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh669GPfx8VaP7JSkvcTjTyNubUl9Ou-gWEzRtdmSkbRjKuz4QMUcTcFLZV147SLQug9ooHhG8bV1MWW1xoraI4uXAOva_PuwXv1Zr-KK-U6W2VowwACKHSiy-6EvRqff009PhNU8muO8BmDy8UFxf5AX_AxtFxAViQdvqh2RcoXQKg8IjeyN0i__lG/s579/empty-linux-hardened-repo.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="72" data-original-width="579" height="40" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh669GPfx8VaP7JSkvcTjTyNubUl9Ou-gWEzRtdmSkbRjKuz4QMUcTcFLZV147SLQug9ooHhG8bV1MWW1xoraI4uXAOva_PuwXv1Zr-KK-U6W2VowwACKHSiy-6EvRqff009PhNU8muO8BmDy8UFxf5AX_AxtFxAViQdvqh2RcoXQKg8IjeyN0i__lG/s320/empty-linux-hardened-repo.png" width="320" /></a></div><br /><p>The first backup chain is for an unencrypted backup job. The backup job is configured to use a standard Windows repository. There are 2 full restore points in the backup chain. Each restore point is 960 MB and the total size on disk is 1.87 GB. 
We use "move backup" to send the backup chain to the hardened repository:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYtjJX8BpH9GOJO_LwUFcYLR5LTnVvvDIcYAeWfolTm_79T9WBfj5W77gM9MyCaY3G5aawnd_-gsar6RT7fR5F0Vz84EUrTcx9iid9R7OrvUnKD4LuOm_uA9pisTjtUcepqBgpU225LtPqCqxn581QC77_x8J6QoTzO6y3Q5G4oZ4MhOzzX-WGK22M/s586/vbr-move-backup-select-repo.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="349" data-original-width="586" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYtjJX8BpH9GOJO_LwUFcYLR5LTnVvvDIcYAeWfolTm_79T9WBfj5W77gM9MyCaY3G5aawnd_-gsar6RT7fR5F0Vz84EUrTcx9iid9R7OrvUnKD4LuOm_uA9pisTjtUcepqBgpU225LtPqCqxn581QC77_x8J6QoTzO6y3Q5G4oZ4MhOzzX-WGK22M/s320/vbr-move-backup-select-repo.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgM2pUG5ZganQRUDWd0csiiFLTSyUxABxkVhU81GOlif11R_0GSnveafnyww_cRrac5CBmeH1MEonyTNTTU_v-6pl7WLaGgM5TJ1ZQG14ylPKQCRQSWaQuD0S_7F3TxtO3bq5Q0oKBepahDilJki3mzu1jgXy-9umiAcd84mYVXhw6DedRqNobIAaLV/s790/vbr-move-backup-job-finished.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="590" data-original-width="790" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgM2pUG5ZganQRUDWd0csiiFLTSyUxABxkVhU81GOlif11R_0GSnveafnyww_cRrac5CBmeH1MEonyTNTTU_v-6pl7WLaGgM5TJ1ZQG14ylPKQCRQSWaQuD0S_7F3TxtO3bq5Q0oKBepahDilJki3mzu1jgXy-9umiAcd84mYVXhw6DedRqNobIAaLV/s320/vbr-move-backup-job-finished.png" width="320" /></a></div><div><br /></div>Once the move process finished, the backup job has been updated to point to the new repository. Let's check what happened on the Linux hardened repo. 
<div><br /></div><div>Find the backup chain in our repo:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqvJKu_KJJ4hPhWY1Fa2kioY0ViRBE5keStX2hqCkfeHiRZE352c0qj4GNdazjhunCNeiu-JbdcAqWZVz7fl6Jgyrnv8Vwv0-hHILhj4ubUe_TYC_C4RNJBOayqiHDuthHFzieDjF_Oz8lXFMGbhqvABgdSNpc90RfLvBxj4kcfYDXEymNB4QcHVHQ/s1037/hardened-repo-backup-restore-points.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="95" data-original-width="1037" height="29" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqvJKu_KJJ4hPhWY1Fa2kioY0ViRBE5keStX2hqCkfeHiRZE352c0qj4GNdazjhunCNeiu-JbdcAqWZVz7fl6Jgyrnv8Vwv0-hHILhj4ubUe_TYC_C4RNJBOayqiHDuthHFzieDjF_Oz8lXFMGbhqvABgdSNpc90RfLvBxj4kcfYDXEymNB4QcHVHQ/s320/hardened-repo-backup-restore-points.png" width="320" /></a></div><div><br /></div>Check the immutability flag:<div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJdcGEner-2ka38dYzdz0w34RlN8pmqU6YfbWoFf1zI31-fkcuGDhZWZ1I0tygewImV8V6xxVYILvey3wtPRaRVxdKUjORg00JDdWrzfdKhQh1iFZSRlhv9_OscYmJKA0SosDQ1aplWWojxzWqtOltawmICWQa6Nf770VDvwOAPzhGqGijThGp0rH4/s967/hardened-repo-backup-restore-points-immu-flag.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="75" data-original-width="967" height="25" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJdcGEner-2ka38dYzdz0w34RlN8pmqU6YfbWoFf1zI31-fkcuGDhZWZ1I0tygewImV8V6xxVYILvey3wtPRaRVxdKUjORg00JDdWrzfdKhQh1iFZSRlhv9_OscYmJKA0SosDQ1aplWWojxzWqtOltawmICWQa6Nf770VDvwOAPzhGqGijThGp0rH4/s320/hardened-repo-backup-restore-points-immu-flag.png" width="320" /></a></div><div><br /></div>The restore points are set as immutable. The metadata file is not since this file is modified during each backup operation. 
Trying to delete any of the restore points will fail:<br /> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtkKMV1kk3N66a_VoCrRTxQVLskCg3FP1lPSZSsL52ywX0fN8expEVLss0_4O2872oNzO4QCY96REeU5uebI4tKNLjZvJusmBM2qK3buZR6rjuhO2zNtadQfW_ey5Yh3zN71vg2GoqBUMcvatc8tSyB7NkrToY2Yx--eLDnMQX2qCFcad3UEIq33W9/s1221/hardened-repo-delete-restore-point.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="41" data-original-width="1221" height="11" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtkKMV1kk3N66a_VoCrRTxQVLskCg3FP1lPSZSsL52ywX0fN8expEVLss0_4O2872oNzO4QCY96REeU5uebI4tKNLjZvJusmBM2qK3buZR6rjuhO2zNtadQfW_ey5Yh3zN71vg2GoqBUMcvatc8tSyB7NkrToY2Yx--eLDnMQX2qCFcad3UEIq33W9/s320/hardened-repo-delete-restore-point.png" width="320" /></a></div><div><br /><div>We can also check that XFS fast clone is working by looking at the used space on the repo, which is less than the sum of the 2 full backups:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzxQGTHx8fiEd0mk0xw7UuTaAeZ2sKcCBUPVMnAm_0nKjC0Ds4zaljW_sJ9IDfCpOnf6ZpRT9lXbT-n_4cB66P0Nte9J0n5RLGTQNveT4Clif7Lq9VD7ISEi57Enhyit8neUI5t0MmJPM8CcYquBzaa0brr8kFGxCqYv8HgtRij_Oq2UDw3vvFUGOI/s614/hardened-repo-used-space.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="62" data-original-width="614" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzxQGTHx8fiEd0mk0xw7UuTaAeZ2sKcCBUPVMnAm_0nKjC0Ds4zaljW_sJ9IDfCpOnf6ZpRT9lXbT-n_4cB66P0Nte9J0n5RLGTQNveT4Clif7Lq9VD7ISEi57Enhyit8neUI5t0MmJPM8CcYquBzaa0brr8kFGxCqYv8HgtRij_Oq2UDw3vvFUGOI/s320/hardened-repo-used-space.png" width="320" /></a></div><br /><p>In this post we've looked at the features of the hardened repository and how they work. 
To implement a hardened repository in your environment, follow the steps in the <a href="https://helpcenter.veeam.com/docs/backup/vsphere/hardened_repository.html?ver=120" target="_blank">user guide</a>.</p></div></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-53930585313237779942023-03-11T15:35:00.003+02:002023-03-11T15:35:17.852+02:00Clear DNS Cache on VCSA after ESXi IP Address Update<p>I recently had to make some changes and modify the ESXi management IP address. Once the ESXi host was put into maintenance mode and removed from the vCenter Server inventory, I updated the DNS server records: both A and PTR. After checking that DNS resolution worked, I tried to re-add the hosts to vCenter Server using their FQDN, but it failed with a "no route to host" error. This is because of the DNS client cache on VCSA. </p><p>To fix it quickly, without waiting for the cache to expire, SSH to the VCSA appliance and run the following commands: </p><p><span style="font-family: courier;">systemctl restart dnsmasq</span></p><p><span style="font-family: courier;">systemctl restart systemd-resolved</span></p><p>Once the services are restarted, you can re-add the ESXi hosts using their FQDN.</p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-20555899623456629552023-02-22T10:00:00.001+02:002023-02-22T10:00:00.154+02:00 A Look at Veeam Backup & Replication v12 NAS Backup - Creating file share backup job<p>In the previous post "A Look at Veeam Backup & Replication v12 NAS Backup - Initial Configuration" we discussed the NAS backup architecture and the initial configuration needed to set up the infrastructure. 
It is time to continue with the creation of the backup job and its options</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihTj6j5X5NCP5Wh85-f25gZa6qQzOEfYDCL7dL_Q4xqcyypXvRsZnAAwknezRRBlMbjoJVIAC6sKREmVuczIDiFZd_0j7IJ71KneDusWqNPnSe2HLJS4y2I_q8k2-ihdd2O-oi85BVxGWTvHnLlqZodh0U_nc0j7IIIk_yEyeEgOsHGw9uhAX49SEu/s441/config_nas_backup_job.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="192" data-original-width="441" height="139" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihTj6j5X5NCP5Wh85-f25gZa6qQzOEfYDCL7dL_Q4xqcyypXvRsZnAAwknezRRBlMbjoJVIAC6sKREmVuczIDiFZd_0j7IJ71KneDusWqNPnSe2HLJS4y2I_q8k2-ihdd2O-oi85BVxGWTvHnLlqZodh0U_nc0j7IIIk_yEyeEgOsHGw9uhAX49SEu/s320/config_nas_backup_job.png" width="320" /></a></div><div><br /></div>Give the backup job a name<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz1zpBmcGiHa5UkuwIZL2_eG92onL3JVMhdH30pzudTFwlF7rWlFZwo1uuE6sWmBZsY51g6AxO7-DqdIjYMFLpIS5yG-MqodHOi8yx2H4B0J7hElw2BOxd8bZsjEr3l-TD9EgRlX3YQLdw2B7EB5UmOAiO78lr47YR3Kn3zHpGqwfVuGMYuXA6JVYi/s1008/config_nas_backup_job_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz1zpBmcGiHa5UkuwIZL2_eG92onL3JVMhdH30pzudTFwlF7rWlFZwo1uuE6sWmBZsY51g6AxO7-DqdIjYMFLpIS5yG-MqodHOi8yx2H4B0J7hElw2BOxd8bZsjEr3l-TD9EgRlX3YQLdw2B7EB5UmOAiO78lr47YR3Kn3zHpGqwfVuGMYuXA6JVYi/s320/config_nas_backup_job_1.png" width="320" /></a></div><div><br /></div>Select the shares to protect<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcm5trktGZ0__l6N0UlYTej7EJTP1vGK_8PLaP9QWU5zFqn3xTQKJ34VXKlvCMUotXsS9vsFX72rrpvL6ILtY-D2kM4MMMVmYjeHjOh2WihqF1YgHkllaz7f6NCT4HW-HNsCPx7vL2Ykirx30rw4-a1UUBTh2fKnGvh4kCDRNXz26YslAiRq2MNOzs/s1008/config_nas_backup_job_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcm5trktGZ0__l6N0UlYTej7EJTP1vGK_8PLaP9QWU5zFqn3xTQKJ34VXKlvCMUotXsS9vsFX72rrpvL6ILtY-D2kM4MMMVmYjeHjOh2WihqF1YgHkllaz7f6NCT4HW-HNsCPx7vL2Ykirx30rw4-a1UUBTh2fKnGvh4kCDRNXz26YslAiRq2MNOzs/s320/config_nas_backup_job_2.png" width="320" /></a></div><div><br /></div>If you want to exclude files from being backed up, go to Advanced<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfxJZLy6qcV1rLq2KRGKUStWwH0v0Xus7O3mWLFhIS1ApZB8Vr4ITpGqNsy2sa-D1z-sMmE_ezcHDGiYm1CrKGQPTer_WT4Fb_A9KdKUApYvWCKEt9HTk6oHT_7KOqwVDkoMDLQ706QSZfH4ADr4UFzV2BUWXT8J5t-KOwrSifSv2yeR_nn_sll6Cc/s1008/config_nas_backup_job_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfxJZLy6qcV1rLq2KRGKUStWwH0v0Xus7O3mWLFhIS1ApZB8Vr4ITpGqNsy2sa-D1z-sMmE_ezcHDGiYm1CrKGQPTer_WT4Fb_A9KdKUApYvWCKEt9HTk6oHT_7KOqwVDkoMDLQ706QSZfH4ADr4UFzV2BUWXT8J5t-KOwrSifSv2yeR_nn_sll6Cc/s320/config_nas_backup_job_3.png" width="320" /></a></div><div><br /></div>Select the repository (we use here S3 compatible on premises minio)<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGQ-bDwa6kEzsE_4YBMhFhiOsKDkqoQrmuata83S7FXWnk2zciORsL6zUTPPGAge1P-FUoBJ2xeaEO-1_53E-WzpUpvKliR8m4CIWWN6XSXqPKEemOJsxKfzZ58bx-Z-XxQuHCoZHNjd8IazprJ2ZaqviaFs7tl97_6hJbdfQSmXtSccV8F24NzLr5/s1008/config_nas_backup_job_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGQ-bDwa6kEzsE_4YBMhFhiOsKDkqoQrmuata83S7FXWnk2zciORsL6zUTPPGAge1P-FUoBJ2xeaEO-1_53E-WzpUpvKliR8m4CIWWN6XSXqPKEemOJsxKfzZ58bx-Z-XxQuHCoZHNjd8IazprJ2ZaqviaFs7tl97_6hJbdfQSmXtSccV8F24NzLr5/s320/config_nas_backup_job_4.png" width="320" /></a></div><div><br /></div>Advanced settings for the backup job will let you specify how many versions to protect<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUOouRRjWzCDhExzmdMUZ6mOvlnD5bPggTTwndYoPT9aP9CjvilYybRfnnUmefaQjt6Tagg5C04ApOiHGsVJ_zVZkZramHXljoohsKebhEyNSI85gDz-Sywd96onxXpZ0y67Za8kvc6xekrWHCwHQ_agE61vQfYuBbee-IDKAOJW1UB_uMH2JI26Ga/s1008/config_nas_backup_job_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUOouRRjWzCDhExzmdMUZ6mOvlnD5bPggTTwndYoPT9aP9CjvilYybRfnnUmefaQjt6Tagg5C04ApOiHGsVJ_zVZkZramHXljoohsKebhEyNSI85gDz-Sywd96onxXpZ0y67Za8kvc6xekrWHCwHQ_agE61vQfYuBbee-IDKAOJW1UB_uMH2JI26Ga/s320/config_nas_backup_job_5.png" width="320" /></a></div><div><br /></div>Specify how to process ACLs for files and folders<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAVzi-5dcsONvjWPKrZhsMbWRA57GQxwwrRXEhzXTR6_G0mdItaNlxx4s-9Y3ofJqexZZNBLNrEStaQiXWwAvifyzLuGX-lG1xJNCehHNlB4wNw5UPX50_cIZo-OyRTkqJOISoHxeiwcjTgFFJt9R0rjVzcEVJ_UYpGiRvM_y-07JdaWKGVhG3ARQX/s1008/config_nas_backup_job_6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAVzi-5dcsONvjWPKrZhsMbWRA57GQxwwrRXEhzXTR6_G0mdItaNlxx4s-9Y3ofJqexZZNBLNrEStaQiXWwAvifyzLuGX-lG1xJNCehHNlB4wNw5UPX50_cIZo-OyRTkqJOISoHxeiwcjTgFFJt9R0rjVzcEVJ_UYpGiRvM_y-07JdaWKGVhG3ARQX/s320/config_nas_backup_job_6.png" width="320" /></a></div><div><br /></div>Define compression and encryption <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmnVLjqbCir-d5P2uRUgqakJx8r0GnMLC_2RAbcirXi_5TTDOP3ylqN9Q5wE1rCJoW1yN14_1qGNSFF5iPC5aAl-w0YD0tYnhNZGPyq5eMjF43Xte8_NvFpNj60sMO6JG8v3bzFUv1QcRqSinUmRiv12I2OVC9bPoDn3n_DR4UN9vpRrBBMRT2dEdt/s1008/config_nas_backup_job_7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmnVLjqbCir-d5P2uRUgqakJx8r0GnMLC_2RAbcirXi_5TTDOP3ylqN9Q5wE1rCJoW1yN14_1qGNSFF5iPC5aAl-w0YD0tYnhNZGPyq5eMjF43Xte8_NvFpNj60sMO6JG8v3bzFUv1QcRqSinUmRiv12I2OVC9bPoDn3n_DR4UN9vpRrBBMRT2dEdt/s320/config_nas_backup_job_7.png" width="320" /></a></div><div><br /></div>If you want to plan periodic backup file maintenance, you can do it here<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdI7HoxLNQx2rDC2HjZgmWz9jCDYEvwy4oQpoE_h5m9VYZpBtyxCrykIpFkCuLJIAplBde_w3ppljZgQgKhAai8wL3nJof02ZMgIQs7b2d6Y0b_GvLsIwldlxcp6wvyP53QS2dFzqIEuEJL3mI4Im1hEXXpGcmlgNAZ_3wLCLFdcPIiHypRujwLv1E/s1008/config_nas_backup_job_8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdI7HoxLNQx2rDC2HjZgmWz9jCDYEvwy4oQpoE_h5m9VYZpBtyxCrykIpFkCuLJIAplBde_w3ppljZgQgKhAai8wL3nJof02ZMgIQs7b2d6Y0b_GvLsIwldlxcp6wvyP53QS2dFzqIEuEJL3mI4Im1hEXXpGcmlgNAZ_3wLCLFdcPIiHypRujwLv1E/s320/config_nas_backup_job_8.png" width="320" /></a></div><div><br /></div>You can run scripts pre and post job execution, for example if you want to create a snapshot of the file share before the job runs<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdSoHfCT62eFQJNGPKNi7cdiV0k-09wWyojNhzDEPRYBiwvj7gg9EIGHYwojZWcqBBT_7--SXetvsKSWSOQRQBDT7CAKQrtg44m3kgqJhP9iOZRyb_kG6ACh9JiIUsgUDOusxItdcWBsit2M9zEbQnIUJwJrViHrawVxxK0ugkFz9PnmmiT-cI28Aj/s1008/config_nas_backup_job_9.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdSoHfCT62eFQJNGPKNi7cdiV0k-09wWyojNhzDEPRYBiwvj7gg9EIGHYwojZWcqBBT_7--SXetvsKSWSOQRQBDT7CAKQrtg44m3kgqJhP9iOZRyb_kG6ACh9JiIUsgUDOusxItdcWBsit2M9zEbQnIUJwJrViHrawVxxK0ugkFz9PnmmiT-cI28Aj/s320/config_nas_backup_job_9.png" width="320" /></a></div><div><br /></div>In case you would like to get warnings in the job about skipped files, configure here<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja8xJe0pHuBDcwdndjkUo1IDfPG46nNZKR0od0gEaSaorGi05vcKcAzSwFLUlBD1Iv5ntsz0KAzzHEYvJUOxfQfCg3OOByxM01vkKsnGjhyRQr8moDfa3SPyPzSTq8QFXq6IkHK2MocMx6nIr0sDnx50BDcNsTrXXXPWnksROJ75nOWlN7bsOQQiWq/s1008/config_nas_backup_job_10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja8xJe0pHuBDcwdndjkUo1IDfPG46nNZKR0od0gEaSaorGi05vcKcAzSwFLUlBD1Iv5ntsz0KAzzHEYvJUOxfQfCg3OOByxM01vkKsnGjhyRQr8moDfa3SPyPzSTq8QFXq6IkHK2MocMx6nIr0sDnx50BDcNsTrXXXPWnksROJ75nOWlN7bsOQQiWq/s320/config_nas_backup_job_10.png" width="320" /></a></div><div><br /></div>Because we use S3-compatible storage, we are asked about a helper appliance. We will skip it.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnk3EB-lXLQHWqLuVg3c7kdm3OmFnZMQvNbYSaCxIeHnba6CDe-l9enRSxucY5XRJaqh3LfJEAyfl94ylkk6ZIiHekvmBgf9eo-fiYuAGGE7aEAairx1DSTdHpaMMc4LaYq4zcsIp8-Xp7z6rg-iaEhz4poPE9W0jVngRSqpUo8AM1KEnB1mT7jrEK/s1392/config_nas_backup_job_11.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="160" data-original-width="1392" height="37" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnk3EB-lXLQHWqLuVg3c7kdm3OmFnZMQvNbYSaCxIeHnba6CDe-l9enRSxucY5XRJaqh3LfJEAyfl94ylkk6ZIiHekvmBgf9eo-fiYuAGGE7aEAairx1DSTdHpaMMc4LaYq4zcsIp8-Xp7z6rg-iaEhz4poPE9W0jVngRSqpUo8AM1KEnB1mT7jrEK/s320/config_nas_backup_job_11.png" width="320" /></a></div><div><br /></div>On the archive page we selected another S3-compatible storage, this time in AWS. 
We will use the AWS S3 bucket to hold a copy of the primary backups and also to move there any files older than 3 months<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG0TuEyDlejmoFbQ27s1cOBzFc8tNXOjuSu0s7wNyoPArfVrr3w8OE1DYQ32IyrOWsm0OrlCbtWooTGrnEiA6cHyp8dtEukKahpIbbRuGITtQGKb-54WzJMESUVdAbnwV7WVMf7azemLmymh6vNP33sfR6z5vdIxo93J5U3zXSS2mProdKfWei4hWH/s1008/config_nas_backup_job_12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG0TuEyDlejmoFbQ27s1cOBzFc8tNXOjuSu0s7wNyoPArfVrr3w8OE1DYQ32IyrOWsm0OrlCbtWooTGrnEiA6cHyp8dtEukKahpIbbRuGITtQGKb-54WzJMESUVdAbnwV7WVMf7azemLmymh6vNP33sfR6z5vdIxo93J5U3zXSS2mProdKfWei4hWH/s320/config_nas_backup_job_12.png" width="320" /></a></div><div><br /></div>Here you can filter out the files that are sent to the archive<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_FetfaUj9bgfG0fKz83rxTdZ7JiZFctRakvONS4qswB-oRmmnqy948LtdWNOcsUfRBWjvw2C8hLe2vEQT8IuR7hUEKfHL-czcxCyHFPFy0Bk-6ByEHqy-728DtWz8Ze_yhbUpjE27aYwkm534UhTKJl4wREfGGzX8AZposvLzA7V8B3cwUUWNKaXh/s1008/config_nas_backup_job_13.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_FetfaUj9bgfG0fKz83rxTdZ7JiZFctRakvONS4qswB-oRmmnqy948LtdWNOcsUfRBWjvw2C8hLe2vEQT8IuR7hUEKfHL-czcxCyHFPFy0Bk-6ByEHqy-728DtWz8Ze_yhbUpjE27aYwkm534UhTKJl4wREfGGzX8AZposvLzA7V8B3cwUUWNKaXh/s320/config_nas_backup_job_13.png" width="320" /></a></div><div><br /></div>Finally, schedule the backup job<br /><div class="separator" style="clear: both; text-align: center;"><a
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6ox99rogOfHPJrTESwuFge89hwZMDotyXRzpMtNgw8HhKMGFpgYPR_DWazIU_R8bTWaliFMcCxxdBNyCeLNQSzOMn_E4bjyXm9hH1794UmI_Wcad5Dsm7n900_zAwMaKMDbWrjjDeEKgeOW--XCZE6AURehaGviM15UqTIP4XzNPRRJ6m9Y38JHVX/s1008/config_nas_backup_job_14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6ox99rogOfHPJrTESwuFge89hwZMDotyXRzpMtNgw8HhKMGFpgYPR_DWazIU_R8bTWaliFMcCxxdBNyCeLNQSzOMn_E4bjyXm9hH1794UmI_Wcad5Dsm7n900_zAwMaKMDbWrjjDeEKgeOW--XCZE6AURehaGviM15UqTIP4XzNPRRJ6m9Y38JHVX/s320/config_nas_backup_job_14.png" width="320" /></a></div><div><br /></div>And run it<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7b0kCoJ0vi3R_CNfewt4PU-_MvCxqyHp1CPVzxqVl9s6UJjJTZNwxBaOdu86j-wx2F6bYQq7QbSvZei-V7C-USsN4AH8PtA0GGjgXyZC7hDeA7ejCe9WnAq5sqJ4rzA5DAjDkm02q4u8L03Id96gBP5e566JKbfHe4cNmzUY3psRneVigmigUC6r-/s1008/config_nas_backup_job_15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7b0kCoJ0vi3R_CNfewt4PU-_MvCxqyHp1CPVzxqVl9s6UJjJTZNwxBaOdu86j-wx2F6bYQq7QbSvZei-V7C-USsN4AH8PtA0GGjgXyZC7hDeA7ejCe9WnAq5sqJ4rzA5DAjDkm02q4u8L03Id96gBP5e566JKbfHe4cNmzUY3psRneVigmigUC6r-/s320/config_nas_backup_job_15.png" width="320" /></a></div><br /><p><br /></p><div class="separator" style="clear: both; text-align: center;"></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-60057803154665225102023-02-15T20:27:00.000+02:002023-02-15T20:27:58.943+02:00A Look at Veeam Backup & Replication v12 NAS Backup - Initial Configuration<p>NAS backup was introduced back in v10. 
At the time I published a small article about how to back up a vSAN-based file share using Veeam Backup & Replication (VBR) that you can read <a href="https://www.sysadminstories.com/2020/06/backup-vsan-7-file-share-with-veeam.html" target="_blank">here</a>. A lot has changed since v10 in terms of features added to NAS backup. The new release adds support for <b>direct to object storage backup </b>and <b>immutability</b>, to name just a couple.</p><p>The architecture has stayed fundamentally the same since the release, and its main components can be seen in the following diagram: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfTYtCo5EJEtRDumQSq-xPAWFfWLAIP2mvgrcbF-3U5HhwQwTh0AawCeJuiIPDNkGrvlt9DQztTLaQL3E9JIr9SKTC8WUhQavDp9x5Ed_OIaynl5zwKD_uDTNg7_XcJ45aQ5lXfjZk2RLH8IMZd6HYPYWt5Rycyjm7qyZZupkkrRAvdWQWgrXv7bhZ/s601/nas_backup_architecture.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="459" data-original-width="601" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfTYtCo5EJEtRDumQSq-xPAWFfWLAIP2mvgrcbF-3U5HhwQwTh0AawCeJuiIPDNkGrvlt9DQztTLaQL3E9JIr9SKTC8WUhQavDp9x5Ed_OIaynl5zwKD_uDTNg7_XcJ45aQ5lXfjZk2RLH8IMZd6HYPYWt5Rycyjm7qyZZupkkrRAvdWQWgrXv7bhZ/s320/nas_backup_architecture.png" width="320" /></a></div><br /><p><b>File share </b>- storage device where our protected data (files) resides. </p><p><b>File proxy </b>- backup proxy that runs the data mover service, reads data from the file share and sends it to the backup repository.</p><p><b>Cache proxy </b>- a dedicated component in the NAS backup architecture that keeps metadata about the protected sources. The cache holds temporary data and is used to reduce the load on the source during backups. 
</p><p><b>Backup repository </b>- storage location for the backups </p><p><b>Backup server - </b>controls and coordinates jobs and resource allocation</p><p>In a large environment the components will be sized accordingly and distributed across different machines. The lab setup is a bit different: all Veeam components run on the same machine. As the source we have installed a <a href="https://www.truenas.com/docs/core/gettingstarted/install/" target="_blank">TrueNAS VM</a> serving 2 NFS file shares </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTnIlM_utmhDX5ABtRxW3v2QaJ1TbZpeWmcgEDJ1yzgsAIgyHxMBu4tLWziQNnB41TEDRgfu9YJlpfob6EyynmtLPtxPqKUzZ9IRTOHL1N29d6YyglhHi5vHDDENq0v49-FbUqENv4aIcP1nwXkvcsb-kziQ6aT3Lc0aGup57RpKqBO-DaQa5QWw21/s549/lab_setup_single_vbr.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="464" data-original-width="549" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTnIlM_utmhDX5ABtRxW3v2QaJ1TbZpeWmcgEDJ1yzgsAIgyHxMBu4tLWziQNnB41TEDRgfu9YJlpfob6EyynmtLPtxPqKUzZ9IRTOHL1N29d6YyglhHi5vHDDENq0v49-FbUqENv4aIcP1nwXkvcsb-kziQ6aT3Lc0aGup57RpKqBO-DaQa5QWw21/s320/lab_setup_single_vbr.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><div style="text-align: justify;"><span style="text-align: left;">To populate the file shares we've used a simple bash script similar to the code below. To make it faster, we've used multiple scripts that we ran in parallel. </span></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><span style="text-align: left;">
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #888888;">#!/bin/bash</span>
<span style="color: #888888;"># create 100000 files of 2 KB of random data each</span>
<span style="color: #888888;">for i in {1..100000}</span>
<span style="color: #888888;">do</span>
<span style="color: #888888;"> head -c 2K /dev/urandom > /mnt/filer/lots_of_files/2K_file_$i</span>
<span style="color: #888888;">done</span>
</pre></div>
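The loop above can also be parallelized directly with xargs instead of launching several copies of the script by hand. A minimal sketch, using a temporary directory and a smaller file count in place of the /mnt/filer path and the 100000 iterations from the script:

```shell
# Sketch only: create many small random files in parallel.
# OUT and the counts are illustrative values for a quick test.
OUT=$(mktemp -d)
seq 1 1000 | xargs -P 4 -I{} sh -c "head -c 2K /dev/urandom > $OUT/2K_file_{}"
echo "created $(ls "$OUT" | wc -l) files in $OUT"
```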
<br /></span></div><div style="text-align: justify;"><span style="text-align: left;">The lab setup has limitations coming from ethernet connectivity between hosts, </span><span style="text-align: left;">Intel NUC</span><span style="text-align: left;"> resources (</span><span style="text-align: left;">SCSI interfaces of the local datastores, CPU and RAM) and resources allocated to the VMs. </span></div><div style="text-align: justify;"><div><br /></div></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><b>VBR Configuration</b></div><div style="text-align: justify;">We will start preparing VBR for NAS backups. Add the required Veeam roles: NAS backup proxy and cache repo. Then we will connect the NFS shares to the server. For brevity of the post, steps in the wizard where no changes are done will be skipped. </div><div style="text-align: justify;"><b><br /></b></div><div style="text-align: justify;"><b>NAS Backup Proxy</b></div><div style="text-align: justify;">In our scenario with all in one VBR deployment, the NAS backup proxy role will be installed by default on the backup server.</div><div style="text-align: justify;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjh05kmKPCzXpOiEYFY0GUSz0cl6xfBiOoWRJPcHk4wVHhht0WxdaBcG9J7Bub0osMmDflF2mrJa9Y4XidgfrQ5iciedGAke7U6yhPefk2wTdgT4OfrwktYr8iJRNkxXXj7JW9l7LDeVgZ9mJJsoqEzEhlmnbQ_HV7IlfJcwNBc7hw7Inb-1n-mpMsp/s1201/nas_backup_proxy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="162" data-original-width="1201" height="43" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjh05kmKPCzXpOiEYFY0GUSz0cl6xfBiOoWRJPcHk4wVHhht0WxdaBcG9J7Bub0osMmDflF2mrJa9Y4XidgfrQ5iciedGAke7U6yhPefk2wTdgT4OfrwktYr8iJRNkxXXj7JW9l7LDeVgZ9mJJsoqEzEhlmnbQ_HV7IlfJcwNBc7hw7Inb-1n-mpMsp/s320/nas_backup_proxy.png" width="320" /></a></div><br /><div 
style="text-align: justify;">If you want to add the proxy role(s) on different machines, you need to start the "<b>Add proxy</b>" wizard from "<b>Backup Infrastructure</b>"</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Select "General purpose backup proxy" </div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlvn23FZchAkvb5746mICGjn1fA1a-G_zyYPzxjRCoAZVOpVKgKMP3F93v1TflchLSvlaJ_QWp9n0kOQLbTvUn2M_YGHdqbzlp9Emi5-PC3Py-4HY9lOhE9Q3sfCuPKKPmgrN9CrBGPeH2mhSadrq1x_-Asb_rkGNnycZ9QC6R7hNjye2nLaE4Td7E/s710/add_nas-proxy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="610" data-original-width="710" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlvn23FZchAkvb5746mICGjn1fA1a-G_zyYPzxjRCoAZVOpVKgKMP3F93v1TflchLSvlaJ_QWp9n0kOQLbTvUn2M_YGHdqbzlp9Emi5-PC3Py-4HY9lOhE9Q3sfCuPKKPmgrN9CrBGPeH2mhSadrq1x_-Asb_rkGNnycZ9QC6R7hNjye2nLaE4Td7E/s320/add_nas-proxy.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Then choose the server to which you want to assign the NAS proxy role. If the server you want is not in the list, press <b>Add new </b>to start adding it as a managed server. We've only lowered the number of max concurrent tasks due to limited lab resources. Normally you would keep the default settings. 
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrSBN8fbNqaPqcYU92uAYaGTv21_43fxXu_hyzdEcj98aLEDWsPt2KI_W18dHGaqsGUek0v8biK3DxQsgmEdzKm7vMpSlEq1N3o7cWk_STmbxagLeGNgw80mRrrQ7EP9CLJ9ymOAfGt71RHAunYstpwYIFIPwo8lzzjLnoE0Opi6xs3B2dRGUGYMiD/s1008/add_nas-proxy_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrSBN8fbNqaPqcYU92uAYaGTv21_43fxXu_hyzdEcj98aLEDWsPt2KI_W18dHGaqsGUek0v8biK3DxQsgmEdzKm7vMpSlEq1N3o7cWk_STmbxagLeGNgw80mRrrQ7EP9CLJ9ymOAfGt71RHAunYstpwYIFIPwo8lzzjLnoE0Opi6xs3B2dRGUGYMiD/s320/add_nas-proxy_2.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Next follow the wizard's screens and install the proxy role. 
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh54bKumBbSLZQkusWreu-RZ6H-IwKe7mAHa-h_X0muntcNboj0t6_-0k1WH6m03gryvJ-R6CrOgLj_LnsAGo-Roy_vXiV8M2EA1XVOnZ5a6vEQ2bjySoCHMbvkEUdB_orUvNkYPbCOGIiu3uKtSYp09tvwzbQqOIwzDAC8oENGYfOfLOGQZfauKSbd/s1008/add_nas-proxy_6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh54bKumBbSLZQkusWreu-RZ6H-IwKe7mAHa-h_X0muntcNboj0t6_-0k1WH6m03gryvJ-R6CrOgLj_LnsAGo-Roy_vXiV8M2EA1XVOnZ5a6vEQ2bjySoCHMbvkEUdB_orUvNkYPbCOGIiu3uKtSYp09tvwzbQqOIwzDAC8oENGYfOfLOGQZfauKSbd/s320/add_nas-proxy_6.png" width="320" /></a></div><br /><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><b>Add Cache Repository</b></div><div style="text-align: justify;"><b><br /></b></div><div style="text-align: justify;">The cache repo role should be as close as possible to the proxy and the source. It holds metadata in memory, so a fast disk is not mandatory. In the lab we install it on the backup server VM, but in a production environment you should move it to another machine to avoid resource competition (or size your backup server to accommodate the cache repo). To create a cache repo, go to <b>Backup Repositories</b>, start the <b>Add Backup Repository </b>wizard and follow it. No special configuration is needed. 
</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Select direct attached storage</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdVjP8m00Foahs-1KpJCT0Z3AHFnm8dLPDcHcnvKY0fq6gI0J8d-bm_DRR5nE3zUtm2a-eEEoc7dj3N_M7BUjP-BdvLCq6HKYZMqqgRZwFvmQuTDaGzAYkAB3P_E9Ma6DPtd-SiEt305VFJcP-dQKD0WZGVXxPui9cghxowwNHqTL--mpIK-OMF_xi/s710/add_cache_repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="610" data-original-width="710" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdVjP8m00Foahs-1KpJCT0Z3AHFnm8dLPDcHcnvKY0fq6gI0J8d-bm_DRR5nE3zUtm2a-eEEoc7dj3N_M7BUjP-BdvLCq6HKYZMqqgRZwFvmQuTDaGzAYkAB3P_E9Ma6DPtd-SiEt305VFJcP-dQKD0WZGVXxPui9cghxowwNHqTL--mpIK-OMF_xi/s320/add_cache_repo.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Select repo OS</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6DsoNIAcLdWgE_dZRRu5zUyGmFfvDZqbbU6Xma3w8tFFhG7ck8jT7RmMEU-0HpQ74u7uEZMsmhlAXoi0BwRsZ8OJh_fOSVG5qOBMIYyx6STPS3QIXQ4-BLsj4Q1tk7cVvFowz1mtOz6gGoUoYGGR_eB4Asdb9mT2zsU2rmAw9B7acNi9l2taZn74T/s710/add_cache_repo_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="610" data-original-width="710" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6DsoNIAcLdWgE_dZRRu5zUyGmFfvDZqbbU6Xma3w8tFFhG7ck8jT7RmMEU-0HpQ74u7uEZMsmhlAXoi0BwRsZ8OJh_fOSVG5qOBMIYyx6STPS3QIXQ4-BLsj4Q1tk7cVvFowz1mtOz6gGoUoYGGR_eB4Asdb9mT2zsU2rmAw9B7acNi9l2taZn74T/s320/add_cache_repo_2.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">Give the repo a name</div><div class="separator" style="clear: both; text-align: 
center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCtjbZ6dozl17a_oKQ_nF5QT56sLrTBhGff9TlyF-zPkK9ZT-0g4H2uUHbfgbeNRxlKywBUjqk-nQ225l_-XyxV8yHNWziejNlNQVoy7oZXndueP0iYnFfneQlLzgHbITIkLX-Y5QCNOXe3419QuYs9I9dajaWF6cvVmW1EdY25LWTRctqDXByvKnX/s1008/add_cache_repo_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCtjbZ6dozl17a_oKQ_nF5QT56sLrTBhGff9TlyF-zPkK9ZT-0g4H2uUHbfgbeNRxlKywBUjqk-nQ225l_-XyxV8yHNWziejNlNQVoy7oZXndueP0iYnFfneQlLzgHbITIkLX-Y5QCNOXe3419QuYs9I9dajaWF6cvVmW1EdY25LWTRctqDXByvKnX/s320/add_cache_repo_3.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Select the managed server where to install the cache repo and disk location for the repo (press Populate button). It needs only a few GB of free space. If the server you want is not in the list, press <b>Add new </b>to start adding it as a managed server. 
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyhCju8bSrdgd0lJ5mpvPVVCMO6XLUrLY4-j2Gi27jJV7ozj4ufh-R15-gnUtQIutjqA1PxNLQ0P1AMUKxiGxWkkV5I8H8ikXOP-h9JvRtaAZ8DIwjIwIdpUnvShi_Vj0JmgexdRiyQVpBgGzSXO9vIMlWwQibljPurVWBIbEhvsA_HbtH022sF-fk/s1008/add_cache_repo_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyhCju8bSrdgd0lJ5mpvPVVCMO6XLUrLY4-j2Gi27jJV7ozj4ufh-R15-gnUtQIutjqA1PxNLQ0P1AMUKxiGxWkkV5I8H8ikXOP-h9JvRtaAZ8DIwjIwIdpUnvShi_Vj0JmgexdRiyQVpBgGzSXO9vIMlWwQibljPurVWBIbEhvsA_HbtH022sF-fk/s320/add_cache_repo_4.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQYl3K0MR-fjhYrL4OSlmkTJjd-NLGmTbzU0_FfJ2xKYE6igchDMRA-7DDuwADOC8QHQOfzDTKf-S0el7UGVpmr0jTlQsBUftamQnhpUz5jm86ikC18-icWFA9mqAkVKCE7sv43qf8UK_RoYJREXJq3ccZx88Sjl74UQx4fmo91Sr4dPvIGJzUz1D0/s1008/add_cache_repo_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><br /></a>Choose the folder for the repo</div></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUVG-u5BXpw8F8t7IYxO7YqzCTyXaesNx5NQAUBGxcYCuFWYYi-4xzPuXGfruVGzRjT8f9oyBRVnfc0B9WzT-Xa1UdoySNSRC02XeLP4UJVeOobAUG7wP3gjTBqcT99AuLbnAKWqh_MqQVjVw7ITnlbgDq7KPLlwGqSy-oPg3Dmx5kkV4_P5jPtCyb/s1008/add_cache_repo_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUVG-u5BXpw8F8t7IYxO7YqzCTyXaesNx5NQAUBGxcYCuFWYYi-4xzPuXGfruVGzRjT8f9oyBRVnfc0B9WzT-Xa1UdoySNSRC02XeLP4UJVeOobAUG7wP3gjTBqcT99AuLbnAKWqh_MqQVjVw7ITnlbgDq7KPLlwGqSy-oPg3Dmx5kkV4_P5jPtCyb/s320/add_cache_repo_5.png" width="320" /></a></div><div><br /></div>Use the default settings until the end of the wizard. It will install the transport service.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7T0GOtqW0b5ErVpTuU6lYjpmwqAj67FB2Ycy1hVzK8vikO1ZVQmSohq_l_HGA1_sVmmYEFwg5vpLu3eQLhuzLJPYDl5Yy1M0CdTtVMwc_tqS1kwF8N3zAURjNQZ0x-eleB9bRLxKyspdpVh1ImAkiBL5XXYHQYR0mMUDl7PGediIMYR_o-zCjp9hw/s1008/add_cache_repo_7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7T0GOtqW0b5ErVpTuU6lYjpmwqAj67FB2Ycy1hVzK8vikO1ZVQmSohq_l_HGA1_sVmmYEFwg5vpLu3eQLhuzLJPYDl5Yy1M0CdTtVMwc_tqS1kwF8N3zAURjNQZ0x-eleB9bRLxKyspdpVh1ImAkiBL5XXYHQYR0mMUDl7PGediIMYR_o-zCjp9hw/s320/add_cache_repo_7.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><b>Add file share</b><div>We are using NFS shares served from a TrueNAS VM. 
We'll go to "<b>Inventory</b>"<b> </b>and start "<b>Add File Share</b>"</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8ezi3LK5_pF6m9QX4c9TLz5gIfoD7rKvahMFg9rHC5yuTaxh69VjZUwQSr-jSASAgy64715YjIMjQZVLxMZsZtqYF8Z8acnn-kNgj2QO268rnrJa_x2m61oSOYa3lBMqTWRdcra7da2dbZjdSTkcSAHMG4f3en4qQ6wZlw620eUuFj31t0DoT8bUO/s347/vbr_add_file_share.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="253" data-original-width="347" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8ezi3LK5_pF6m9QX4c9TLz5gIfoD7rKvahMFg9rHC5yuTaxh69VjZUwQSr-jSASAgy64715YjIMjQZVLxMZsZtqYF8Z8acnn-kNgj2QO268rnrJa_x2m61oSOYa3lBMqTWRdcra7da2dbZjdSTkcSAHMG4f3en4qQ6wZlw620eUuFj31t0DoT8bUO/s320/vbr_add_file_share.png" width="320" /></a></div><div><br /></div><br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzh9eJyVUX48nTuY11au0jajCfl2ov-klEt6Vj4kuO_VBV7eXmYN9_53fMcFbZ0olh879t3DReIuKTTdc5mCPfAF1lj1h_ckb-dAK74pMZoTn5VoVWoi4ivffsZ2OYBs1y1ssaHxbxGrad2SnTouvXESHOW4_vQzBOCQ5GbJJX11T9BOkgKpb_r_BA/s710/vbr_add_file_share_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="490" data-original-width="710" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzh9eJyVUX48nTuY11au0jajCfl2ov-klEt6Vj4kuO_VBV7eXmYN9_53fMcFbZ0olh879t3DReIuKTTdc5mCPfAF1lj1h_ckb-dAK74pMZoTn5VoVWoi4ivffsZ2OYBs1y1ssaHxbxGrad2SnTouvXESHOW4_vQzBOCQ5GbJJX11T9BOkgKpb_r_BA/s320/vbr_add_file_share_2.png" width="320" /></a></div><div><br /></div>Enter the NFS path to the share. 
Here you can also select advanced settings on how to process the file share - directly from the filer or using a snapshot of the share.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGex3Dokd46KGQdwdyjdDoaLhPcaHX9hDVM2gInyXxmhffv1D_LVCFy7FKaI6M_bfJJ0Er4bH0eurnvR3I4BFb4gCsBP-7czT6U4Kf-PESP5KecaXaKmD22cyAosy92lR1Zdwrt8PYhNI_sWBkGx-Mtm-Cigttww1Cj4nhXkiupOsXo03R3bl94ZMk/s1008/vbr_add_file_share_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGex3Dokd46KGQdwdyjdDoaLhPcaHX9hDVM2gInyXxmhffv1D_LVCFy7FKaI6M_bfJJ0Er4bH0eurnvR3I4BFb4gCsBP-7czT6U4Kf-PESP5KecaXaKmD22cyAosy92lR1Zdwrt8PYhNI_sWBkGx-Mtm-Cigttww1Cj4nhXkiupOsXo03R3bl94ZMk/s320/vbr_add_file_share_3.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivYpVu1usL0htRp3maMzVufrAZWJ7G0Vbq64R30vyn2lw4d-vl1BFISlIbUl1nTmsdjv5Vqp4roorTsW-Zk5cImJkeVxTui1jmp68iMPnGeHKDqGBkDJ3Uo8dfO3TpD0_f4rDAtkA_O0V7csTSmis1AJN6LRzJNnepEw45k6AaOIX7-A3NlzyPU8zZ/s1008/vbr_add_file_share_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivYpVu1usL0htRp3maMzVufrAZWJ7G0Vbq64R30vyn2lw4d-vl1BFISlIbUl1nTmsdjv5Vqp4roorTsW-Zk5cImJkeVxTui1jmp68iMPnGeHKDqGBkDJ3Uo8dfO3TpD0_f4rDAtkA_O0V7csTSmis1AJN6LRzJNnepEw45k6AaOIX7-A3NlzyPU8zZ/s320/vbr_add_file_share_4.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Next, select whether to use any available proxy or a specific one. 
In case you have a distributed architecture with multiple sites, you would select here the proxies closest to your share. The cache repo and how aggressive you want the backup job to be are also configured in the same step.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzig5z5sapAxGoMLgk9D64dR8NWYHF_psyLaOi5Zs2LSnUY_hudVkgfJyKqzzVf_HrhE_VpI1Mw3yN9POxCEW0lAXGC4vLMsTjpMMRPTamp-nW8xUJ6Ybfbdy4Fe5eiyF6PsBjZusCgKqy7hN6I-zobncPUZ4hor_b-IsALLcmCYTpw8RDef6HoYTJ/s1008/vbr_add_file_share_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzig5z5sapAxGoMLgk9D64dR8NWYHF_psyLaOi5Zs2LSnUY_hudVkgfJyKqzzVf_HrhE_VpI1Mw3yN9POxCEW0lAXGC4vLMsTjpMMRPTamp-nW8xUJ6Ybfbdy4Fe5eiyF6PsBjZusCgKqy7hN6I-zobncPUZ4hor_b-IsALLcmCYTpw8RDef6HoYTJ/s320/vbr_add_file_share_5.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: left;">Finally, install Veeam components and finish adding the share.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQgAW10U_zPxeaQy9CZhMJeSnIlSB-LlavOH3qB7S3z_mj2Zx8q1fn5uuCJVYapuvxLXIrYv4IXNOftRC16yrqCm4br-uUHC6i9UvvBS5Wy1yT1Vhg6MJT90r8N55SjFSFHiEqc1leEm247AphVfRwHtOgldNbMT0P7u2hUXH9HU3EHCpiJwV64QO-/s1008/vbr_add_file_share_6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="787" data-original-width="1008" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQgAW10U_zPxeaQy9CZhMJeSnIlSB-LlavOH3qB7S3z_mj2Zx8q1fn5uuCJVYapuvxLXIrYv4IXNOftRC16yrqCm4br-uUHC6i9UvvBS5Wy1yT1Vhg6MJT90r8N55SjFSFHiEqc1leEm247AphVfRwHtOgldNbMT0P7u2hUXH9HU3EHCpiJwV64QO-/s320/vbr_add_file_share_6.png" width="320" /></a></div><br /><div class="separator" 
style="clear: both; text-align: left;"><br /></div><b>Backup repository </b><div>From the point of view of a backup repository there is no difference in v12 between a NAS job and a VM image one. Any available repository can be used now, including direct-to-object storage and hardened repositories. Just pick your favorite. </div><div><b><br /></b><div>Now you are ready to create the backup job - covered in the post A Look at Veeam Backup & Replication v12 NAS Backup - Creating file share backup job.</div><div><br /></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-61126270506161041592023-02-01T21:52:00.003+02:002023-02-01T21:52:24.109+02:00Five Reasons to Monitor Your Infrastructure with Veeam ONE<p> Veeam ONE (VONE) is the monitoring tool for your backup environment - and not only that. You can also use it to gain visibility into your virtual infrastructure. And when you think of the features it provides out of the box - such as proactive alerting, monitoring, reporting, capacity planning, chargeback, intelligent automation and diagnostics - you understand the value it brings to your environment. But what got me so excited about Veeam ONE? 
Well, it was this:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCa6-cjz9nNsk9yEFG3eOjenpaW1sBHwrMI-8mSLGl5Dbw7Diz2JmhKaHDXr9h1Z1WFZ5lkHojO3cm98yfFSyxxcRNAd-lNHM4TfqF8qXvkhG0pKFoaI0YX0B8-B9n9FueMVA_1ozQdaoXMUrTJG5E6gYFlcnuer1WKicimJVfI7Cnn-uNgTwRVo_E/s1166/vbr_vone_alerts.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="314" data-original-width="1166" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCa6-cjz9nNsk9yEFG3eOjenpaW1sBHwrMI-8mSLGl5Dbw7Diz2JmhKaHDXr9h1Z1WFZ5lkHojO3cm98yfFSyxxcRNAd-lNHM4TfqF8qXvkhG0pKFoaI0YX0B8-B9n9FueMVA_1ozQdaoXMUrTJG5E6gYFlcnuer1WKicimJVfI7Cnn-uNgTwRVo_E/s320/vbr_vone_alerts.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both;">It is a screenshot from VONE monitoring my lab environment. It's not the fact that there are alarms that got my attention. It is actually what each of those alarms is describing.</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Let's look closer. Veeam ONE is monitoring in this case the Veeam Backup & Replication server from my personal lab. Upon a quick look at the screenshot you will notice there are 5 different issues displayed. Some are errors, others just warnings. In both cases, these would prove to be critical if ignored in a production environment.</div><div><br /></div><b>1. Backup repository connection failure </b><div><br /></div><div>My scale-out backup repository has a capacity tier in Google Cloud Storage. The S3 bucket is not accessible anymore. This alarm triggers by default when the repo is not accessible for more than 5 minutes. </div><div><br /></div><div><b>2. Backup job state </b></div><div><br /></div><div>This alarm is looking at the state of all backup jobs. 
The backup job has ended with an error because the vSphere tags were deleted from vCenter Server. So no more backups for those VMs. </div><div><br /></div><div><b>3. Suspicious incremental backup size</b></div><div><br /></div><div>This is one of the out-of-the-box alarms that can help you in case of a ransomware attack. It looks at changes in size between incremental backups and triggers to let you know you should further investigate what's happening. </div><div><br /></div><div><b>4. Job disabled</b></div><div><br /></div><div>There are disabled backup jobs. This can be on purpose or by mistake. In any situation, as a backup admin you would like to know if there are any and which ones they are. The predefined time before the alarm is triggered is 12 hours. Moreover, this alarm has remediation actions. You just need to press "Approve Action", enter a comment, and Veeam ONE will re-enable the job. How cool is that!</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLPuF1bsCqp2Ee74lDucsdKbtG122djKYOYs9OB2gPhXE2lLfvq273x4cKwbSgngYK3DdoXYAsDQhH2E8aN3oZGRMeuXOM17WEPYej0NagSmR140PuCCLFah6sdiTTFddkUcEXUotT4mc8YmU2HmU9SbhPSvaUZy8aBq51lNlffGgcKWD3aVdCNX-1/s1391/approve_remediation_action.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="161" data-original-width="1391" height="37" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLPuF1bsCqp2Ee74lDucsdKbtG122djKYOYs9OB2gPhXE2lLfvq273x4cKwbSgngYK3DdoXYAsDQhH2E8aN3oZGRMeuXOM17WEPYej0NagSmR140PuCCLFah6sdiTTFddkUcEXUotT4mc8YmU2HmU9SbhPSvaUZy8aBq51lNlffGgcKWD3aVdCNX-1/s320/approve_remediation_action.png" width="320" /></a></div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Once the action is executed successfully and the job is enabled in VBR, the status changes to let you know.</div><div class="separator" style="clear: both; 
text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjxqP_xq1W1QTRwbPyJAHJMh9W0vfCA_Esx9hLfkz7l9FpauJ7uRdm_z-BJPp-IGxauAySfMvZwqyVpZzJ8mBqKWEvf53Zoopwt7hfLY7EdobypGJu3kH5KcOUhA9WgadLOlGxNuiXhn_pJS2TYsCqjahqIP8yFG4vzdZnylcgHTa8h_-Fsq-fp66i/s1155/remediation_action_successful.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="87" data-original-width="1155" height="24" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjxqP_xq1W1QTRwbPyJAHJMh9W0vfCA_Esx9hLfkz7l9FpauJ7uRdm_z-BJPp-IGxauAySfMvZwqyVpZzJ8mBqKWEvf53Zoopwt7hfLY7EdobypGJu3kH5KcOUhA9WgadLOlGxNuiXhn_pJS2TYsCqjahqIP8yFG4vzdZnylcgHTa8h_-Fsq-fp66i/s320/remediation_action_successful.png" width="320" /></a></div><br /><div class="separator" style="clear: both;"><br /></div><b>5. Immutability state</b><div><br /></div><div>It seems that even though I am using S3-compatible storage, the immutability flag has not been set. In today's cybersecurity context, this is one small configuration that should be applied to all your repositories. Keep your backups protected from any type of modification. </div><div><br /></div><div><b>Conclusion </b></div><div><b><br /></b></div><div>These are only 5 alarms out of the hundreds in Veeam ONE that help you keep your IT infrastructure operating securely. The alarms all come out of the box, but you can customize them and create your own. As I stated above, any of the issues highlighted by these alarms would prove critical should a situation arise. So it's better to catch and solve them proactively. 
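As a closing illustration, the idea behind an alarm like the suspicious incremental backup size check can be sketched in a few lines. This is a simplified illustration only - the function name, the baseline calculation and the threshold are my assumptions, not Veeam ONE's actual algorithm:

```python
# Simplified sketch of a "suspicious incremental backup size" heuristic.
# A sudden jump in incremental size can indicate mass file changes,
# e.g. encryption by ransomware. The ratio threshold is an assumption.
def is_suspicious_increment(history_gb, latest_gb, ratio=3.0):
    """Flag an incremental that is `ratio` times larger than the recent average."""
    if not history_gb:
        return False  # nothing to compare against yet
    baseline = sum(history_gb) / len(history_gb)
    return latest_gb > baseline * ratio

print(is_suspicious_increment([4.2, 3.9, 4.5], 4.8))   # normal growth -> False
print(is_suspicious_increment([4.2, 3.9, 4.5], 60.0))  # sudden jump -> True
```

The real alarm obviously looks at more signals, but the principle is the same: compare the latest increment against a recent baseline and alert on outliers.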
</div><div><br /></div><div><div><br /></div><div><br /></div><div><br /></div><div><p><br /></p></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-48620813894452916892023-01-05T10:29:00.002+02:002023-01-05T10:29:29.448+02:00Managing vSphere VM Templates with Packer<p>Packer is an open source tool developed by HashiCorp that lets you create identical images from the same source. It helps in implementing and managing golden images across your organization. I will be using Packer in a vSphere environment only and will not be using its multi-platform support. The use case I am looking at is managing VM templates by applying infrastructure-as-code concepts. </p><p>The workflow I am implementing uses base VM templates made of a basic OS installation, VMware Tools and networking connectivity. These base templates do not need any management except for periodic updates/patches. The base VMs are then customized into project-specific templates using Packer. The process installs any given project customization - such as additional users, software packages or devices - and creates a new template to be used as the source for prod deployment. Packer will not replace a configuration management tool, but it will reduce the time to deploy and configure the prod (or running) instances. It is faster to have a prepped template than to wait for packages to install on each of your instances during prod deployment. 
The diagram below exemplifies the intended process: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwRTU0Uq9Ha5i2Yh-Y0sDzFMe9ueDIYBnsErfilMxm42z4hOExnLjEA_BZ0M8XPi-lttAvSpyOCrob8x0ozJjxfg8hZXb9YLKCJdmp3R117Vb8THokntnBr6Hp45r2bU3-LDuf-OLZQFcjBMHXOlkhoQq11iQXFLHtUw3BoWnSNchkZG4CIkivdvtq/s772/packer-vm-template-customization-workflow.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="442" data-original-width="772" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwRTU0Uq9Ha5i2Yh-Y0sDzFMe9ueDIYBnsErfilMxm42z4hOExnLjEA_BZ0M8XPi-lttAvSpyOCrob8x0ozJjxfg8hZXb9YLKCJdmp3R117Vb8THokntnBr6Hp45r2bU3-LDuf-OLZQFcjBMHXOlkhoQq11iQXFLHtUw3BoWnSNchkZG4CIkivdvtq/s320/packer-vm-template-customization-workflow.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">In this workflow, Packer plays a crucial role, allowing for fast and repeatable automation of the VM templates based on specific requirements. All credentials are kept in a dedicated secrets manager called Vault. I will not go into detail about Vault; just keep in mind that it is used to store any credentials used by Packer. A new set of templates results at the end of the customization process and these are used to run the prod instances.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Packer will also ensure that any changes to the base VM template are tracked and can be repeated in any other infrastructure, while being written in a human-readable format. Let's look at a simple example where we modify a CentOS 7 base template. 
For our project we will use the following folder structure: </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbk-7ur963P52zngFHbetTYb7MJORKOGrp4lOdWwh6MDAfDlh8pnEto92t5XQ5palC4_BX24c0rb9gZurUQZ9KxWtS25jh8gRJx0WVMi-pQyPZ--6umZy44psx3qoMXim1okDkUnzO9cngn3E_JXsKCOWQjCaHMhha-WO_n9nL-bQuqwYSeu6_jTdV/s349/packer-folder-structure.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="111" data-original-width="349" height="102" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbk-7ur963P52zngFHbetTYb7MJORKOGrp4lOdWwh6MDAfDlh8pnEto92t5XQ5palC4_BX24c0rb9gZurUQZ9KxWtS25jh8gRJx0WVMi-pQyPZ--6umZy44psx3qoMXim1okDkUnzO9cngn3E_JXsKCOWQjCaHMhha-WO_n9nL-bQuqwYSeu6_jTdV/s320/packer-folder-structure.png" width="320" /></a></div><div><br /></div><div>There are 3 files:</div><div><ul style="text-align: left;"><li><b>variables.pkr.hcl</b> - keeps all variable definitions</li><li><b>tmpl-linux.auto.pkrvars.hcl</b> - keeps the initialized input variables and is loaded automatically at run time; this allows you to change only this file when moving to another environment</li><li><b>tmpl-linux.pkr.hcl</b> - main Packer file </li></ul><div>Packer uses HashiCorp Configuration Language (HCL). Let's look at the <b>variables.pkr.hcl</b> file contents:</div><div><br /></div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">variable <span style="background-color: #fff0f0;">"vcenter_server"</span> {
<span style="color: #008800; font-weight: bold;">type</span> = <span style="color: #333399; font-weight: bold;">string</span>
description = <span style="background-color: #fff0f0;">"FQDN or IP address of the vCenter Server instance"</span>
}
variable <span style="background-color: #fff0f0;">"build_user"</span> {
<span style="color: #008800; font-weight: bold;">type</span> = <span style="color: #333399; font-weight: bold;">string</span>
description = <span style="background-color: #fff0f0;">"user name for build account"</span>
}
locals {
timestamp = regex_replace(timestamp(), <span style="background-color: #fff0f0;">"[- TZ:]"</span>, <span style="background-color: #fff0f0;">""</span>)
}</pre><pre style="line-height: 125%; margin: 0px;"><br /></pre><pre style="line-height: 125%; margin: 0px;"><pre style="color: #333333; line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">local <span style="background-color: #fff0f0;">"linux_user_pass"</span> {
expression = vault(<span style="background-color: #fff0f0;">"/kv/data/linux_workshop"</span>, <span style="background-color: #fff0f0;">"${var.ssh_user}"</span>)
sensitive = <span style="color: #008800; font-weight: bold;">true</span>
}</pre><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;"><br /></pre></pre></pre><pre style="line-height: 125%; margin: 0px;">local <span style="background-color: #fff0f0;">"build_user_pass"</span> {
expression = vault(<span style="background-color: #fff0f0;">"/kv/data/build_user"</span>, <span style="background-color: #fff0f0;">"${var.build_user}"</span>)
sensitive = <span style="color: #008800; font-weight: bold;">true</span>
}
</pre></div>
</div><div><br /></div><div>There are 2 types of variables - <b>input variables</b> and <b>local variables</b>. Input variables need to be initialized from a default value, the command line, the environment or variable files (we are using the auto.pkrvars.hcl file for this). Local variables cannot be overridden at run time and can be viewed as a kind of constant. In the example above the variable list has been truncated to keep it readable. You can see input variables such as "vcenter_server" and "build_user". There are also local variables - "timestamp", which is calculated from a function and used in our case in the note field of the VM, and "build_user_pass", which keeps the password for our build user and takes its value from the Vault secrets manager. The "build_user_pass" variable is marked as sensitive, which hides it from the output.</div><div><br /></div><div>Next, let's look at the variable initialization file <b>tmpl-linux.auto.pkrvars.hcl</b>:</div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">vcenter_server = <span style="background-color: #fff0f0;">"vcsa.mylab.local"</span>
build_user = <span style="background-color: #fff0f0;">"build_user@vsphere.local"</span>
</pre></div>
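As a side note (this fragment is illustrative, not part of the project files above), an input variable can also carry a default value directly in variables.pkr.hcl; the auto-loaded file or a -var flag on the command line then only needs to override it when the environment differs:

```hcl
variable "vcenter_server" {
  type        = string
  description = "FQDN or IP address of the vCenter Server instance"
  # illustrative default; *.auto.pkrvars.hcl or -var on the CLI overrides it
  default     = "vcsa.mylab.local"
}
```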
</div><div><br /></div><div>We chose to initialize the variables from a separate file. In it, we just assign values to our input variables. If we need to modify any variable, this is the only place where we make the change, which makes it easier to manage. Again, for ease of reading the file has been truncated.</div><div><br /></div><div>Time to see what the <b>tmpl-linux.pkr.hcl </b>file contains. In the customization we'll apply to our template we are looking at two things:</div><div><ul style="text-align: left;"><li>add a new disk to the target image</li><li>install software packages in the target image</li></ul></div><div><br /></div><div>We'll look at each section in the Packer file. First we define the required plugins - in our case vsphere. You can also make sure that a specific version is loaded. </div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">packer {
required_version = <span style="background-color: #fff0f0;">">= 1.8.5"</span>
required_plugins {
vsphere = {
version = <span style="background-color: #fff0f0;">">= v1.1.1"</span>
source = <span style="background-color: #fff0f0;">"github.com/hashicorp/vsphere"</span>
}
}
}
</pre></div>
<br /></div><div>Next we define the source block, which holds the configuration needed by the builder plugin (the vsphere plugin loaded above).</div><div><br />
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">source <span style="background-color: #fff0f0;">"vsphere-clone"</span> <span style="background-color: #fff0f0;">"linux-vm-1"</span> {
<span style="background-color: #ffaaaa; color: red;">#</span> vcenter server connection
vcenter_server = <span style="background-color: #fff0f0;">"${var.vcenter_server}"</span>
insecure_connection = <span style="background-color: #fff0f0;">"true"</span>
username = <span style="background-color: #fff0f0;">"${var.build_user}"</span>
password = local.build_user_pass
<span style="background-color: #ffaaaa; color: red;">#</span> virtual infrastructure where we build the templates
datacenter = <span style="background-color: #fff0f0;">"${var.datacenter}"</span>
host = <span style="background-color: #fff0f0;">"${var.vsphere_host}"</span>
datastore = <span style="background-color: #fff0f0;">"${var.datastore}"</span>
folder = <span style="background-color: #fff0f0;">"Templates/${var.lab_name}"</span>
<span style="background-color: #ffaaaa; color: red;">#</span> source template name
template = <span style="background-color: #fff0f0;">"${var.src_vm_template}"</span>
<span style="background-color: #ffaaaa; color: red;">#</span> build process connectivity
communicator = <span style="background-color: #fff0f0;">"ssh"</span>
ssh_username = <span style="background-color: #fff0f0;">"${var.ssh_user}"</span>
ssh_password = local.linux_user_pass
<span style="background-color: #ffaaaa; color: red;">#</span> target image name and VM notes
  vm_name = <span style="background-color: #fff0f0;">"tmpl-${var.lab_name}-${var.new_vm_template}"</span>
notes = <span style="background-color: #fff0f0;">"build with packer \n version ${local.timestamp} "</span>
<span style="background-color: #ffaaaa; color: red;">#</span> target image hardware changes
disk_controller_type = [<span style="background-color: #fff0f0;">"pvscsi"</span>]
storage {
disk_size = <span style="color: #008800; font-weight: bold;">var</span>.extra_disk_size
disk_thin_provisioned = <span style="color: #008800; font-weight: bold;">true</span>
disk_controller_index = <span style="color: #0000dd; font-weight: bold;">0</span>
}
convert_to_template = <span style="color: #008800; font-weight: bold;">true</span>
}
</pre></div>
</div><div><br /></div><div>In the source we let the build plugin know how to connect to vCenter Server, what virtual infrastructure to use (datastores, hosts), which source template we will use, how to connect to it, and how the target template we build is configured. At the end we instruct the plugin to convert the newly created image to a VM template. Notice the <b>communicator </b>defined as "ssh". Communicators instruct Packer how to upload and execute scripts in the target image. Packer supports three: none, ssh and winrm. Please mind that some builders, such as the Docker builder, have their own communicators. </div><div><br /></div><div>With the current configuration we can actually define our build process. We've already accomplished half of our customization - adding the new disk is defined in the source block. In the build block we place the configuration that is needed by our build plugin. We will use a shell provisioner to install two packages - htop and tree. In my example, a shell provisioner is sufficient to do the job - it basically runs "yum install" in the target image. However, I would recommend using a proper configuration management tool such as Ansible instead of directly running commands. </div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">build {
sources = [<span style="background-color: #fff0f0;">"source.vsphere-clone.linux-vm-1"</span>]
provisioner <span style="background-color: #fff0f0;">"shell"</span> {
execute_command = <span style="background-color: #fff0f0;">"echo '${local.linux_user_pass}' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'"</span>
inline = [<span style="background-color: #fff0f0;">"yum install tree htop -y"</span>]
}
}
</pre></div>
<br /></div><div><br /></div><div>Notice execute_command - this is a customization of the command we want to run (yum) and we use it to send the sudo password. The password itself is taken from the local variable, which is initialized with the value kept in the Vault secrets manager (as defined in <b>variables.pkr.hcl</b>).</div><div><br /></div><div>The only thing left to do is to validate your configuration and run the build process.</div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #888888;">packer init .</span>
<span style="color: #888888;">packer validate .</span>
<span style="color: #888888;">packer build .</span>
</pre></div>
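As mentioned above, a proper configuration management tool is the better long-term choice for guest customization. With Packer's Ansible provisioner, the build block would look roughly like this - a sketch only, and the playbook path is illustrative:

```hcl
build {
  sources = ["source.vsphere-clone.linux-vm-1"]

  provisioner "ansible" {
    # illustrative playbook; replaces the inline "yum install" shell step
    playbook_file = "./playbooks/baseline.yml"
  }
}
```

This keeps the package list and any further configuration in a playbook that the same team can reuse outside of Packer.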
<br /></div><div>Please note that variable files in this post have been truncated for ease of reading. If you intend to use this example, you would need to fill in the missing variables and initialize them according to your environment. </div><div><br /></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com1tag:blogger.com,1999:blog-555576322574773333.post-28784452814426978462022-12-18T09:42:00.003+02:002022-12-18T09:52:54.770+02:00vCenter Server 8 Upgrade - Unknown Host Error<p> I've recently upgraded my vCenter Server 7.0.3 to 8.0 and during the process I've encountered the following error: <span style="font-family: courier;">Error in method invocation [Errno 1] Unknown host</span></p><p>The error is related to using an IP address instead of an FQDN, as shown in the image below. It will appear in stage 2, after the VCSA VM is deployed in the environment. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDZW8IWxleZPwiudaYyw51PW1NHXI2lHEUx5Aokpt9BBpZHt0CpyhDv4BdighmbkdzFz_KAt5DfT6zVTaOgwJTRPhGtK5pw4HfCbzZMGhMd1HFL9_NCZObpbUa2KClpM6DOG0PW_TX0KlTODsX3Lhe7P6AzQI_AcsqqyckvuiReeAd1htkkKgBXiN5/s864/vcsa-upgrade-stage-2-unknown-host-error.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="655" data-original-width="864" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDZW8IWxleZPwiudaYyw51PW1NHXI2lHEUx5Aokpt9BBpZHt0CpyhDv4BdighmbkdzFz_KAt5DfT6zVTaOgwJTRPhGtK5pw4HfCbzZMGhMd1HFL9_NCZObpbUa2KClpM6DOG0PW_TX0KlTODsX3Lhe7P6AzQI_AcsqqyckvuiReeAd1htkkKgBXiN5/s320/vcsa-upgrade-stage-2-unknown-host-error.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><p></p><p>To avoid it, just use the FQDN everywhere. This error was first mentioned for upgrades from 6.7 to 7.0, but somehow in my lab it carried over to the upgrade from 7.0 to 8.0. 
</p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com3tag:blogger.com,1999:blog-555576322574773333.post-85804799703922846812022-12-08T12:12:00.000+02:002022-12-08T12:12:21.975+02:00VMUG Leader Summit 2022 - How The Times Have Changed<p>It's been 32 months since the last summit took place, and how the times have changed. Not sure if anyone in February 2020 would've dared to imagine the shifts that would come a few weeks later and keep on coming throughout the next two and a half years. And not only socially and politically, but also technologically. The status quo of human society was pretty much ignored and we had to learn, adapt and change. </p><p>This week in Lisbon I had the opportunity to meet again the great community of people from all the corners of the world called VMUG. And the people in it are what make this whole idea great. In 2020 it was the social part that impressed me too. Technologies come and change, trends may shift (again this word) along the way. The people involved in the process are the most important. </p><p>Talking about people, we had (as always) a number of select guests from VMware to talk to us. From Joe Baguley's (CTO EMEA) talk about the skills gap and how AI and robots are slowly making it to the masses and into your house, to Duncan Epping's journey throughout VMware and the idea that everyone wants to be something, but very few want to make the effort to become that something, it all revolved around the changes that we are living through and need to face. And it is scary, especially since the buzzword, without any relation to the event, was AI - more exactly <a href="https://openai.com/blog/chatgpt/" target="_blank">ChatGPT</a>, an OpenAI project trained to interact with humans in conversational natural language. The ease of use and accessibility of this AI model is amazing and it will change a lot. But it is still only a tool in our hands. And the way we use it will make a difference. 
Only two years ago a talk around this subject would've brought a lot of smiles. </p><p>I am a techie, not really great with words, so let me wrap it up. As humans we need certainty, connection, contribution. VMUG is a community that brings all of this and more. And it does it through its members, the people that change, learn and grow along with the community. </p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-168676800878858462022-12-01T12:40:00.000+02:002022-12-01T12:40:19.798+02:00What I've Learned From Using Instant Clones in vSphere<p>Instant clone is a technology to create a powered-on VM using another running VM as its source. An instant clone VM shares memory and disk state with its source VM. Once it is powered on, the instant clone is a fully manageable, independent vCenter Server object. The clones can be customized and have unique MAC addresses and UUIDs. This makes the technology very appealing for use cases where large numbers of VMs need to be created in a short time from a controlled point in time - think about VDIs. </p><p>My use case was on-demand labs generated from the same lab template(s). A lab template is made of 3 to 6 VMs of different sizes running interdependent applications. Users log in to a web app and then request one or more new labs from the available templates. The web app would then start lab provisioning in the background for all the requests via vCenter Server. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpVDLDMNOMfoZrkEhBc8D-j7gf58GMebceUuevABAAG_yOpeoNIQ08BLtPpY4otCAX4FtiYKGFL3tIeUyZY0BFx09cFzmLl4YPRKqcO5LNrG_q-Yn7ZC0ZyPWhV1u6D6YA0T2z7bgVZbmFiuyjc0kzPRSv2yHz4dbvHzJ39Jb_z2P6ort3KCcDZolf/s926/lab_instant_clone.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="248" data-original-width="926" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpVDLDMNOMfoZrkEhBc8D-j7gf58GMebceUuevABAAG_yOpeoNIQ08BLtPpY4otCAX4FtiYKGFL3tIeUyZY0BFx09cFzmLl4YPRKqcO5LNrG_q-Yn7ZC0ZyPWhV1u6D6YA0T2z7bgVZbmFiuyjc0kzPRSv2yHz4dbvHzJ39Jb_z2P6ort3KCcDZolf/s320/lab_instant_clone.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Using full clones would have meant a higher load on the systems and also a longer time to wait for a lab to be ready - boot time of the all the VMs in the cloned lab plus time for services to start in guest OS of each VM. Additionally there was no information on how many labs would be requested at a time. There were also multiple source lab templates having a worse case scenarios of tens to hundreds of VMs being requested within a minute. I chose instant clones as the way forward. 
</div><div><br /></div>When using instant clone there are 2 provisioning workflows: running source VM and frozen source VM, as seen in the picture below, taken from the <a href="https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/cloning-vSphere7-perf.pdf" target="_blank">Understanding Clones in vSphere 7</a> performance study published by VMware.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihV0f00IiAOQfPfa4aI-vW5WVWhRryFEn6kM4wzSRYJwgnjM1oaE7s0xh2Y9aB0DA5TEN3lcuJ7RxXwAxL4MvgmunyIKvBJh0_4BmiAbkgO5iOic7TEmZV0UeBO5t9ibGQj8ZE-nublwnYK5CwVEMofgrXOdGvTTCawzacnH_i86zmh8Dtda8amEJX/s1391/instant-clones-provisioning-workflows.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="422" data-original-width="1391" height="97" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihV0f00IiAOQfPfa4aI-vW5WVWhRryFEn6kM4wzSRYJwgnjM1oaE7s0xh2Y9aB0DA5TEN3lcuJ7RxXwAxL4MvgmunyIKvBJh0_4BmiAbkgO5iOic7TEmZV0UeBO5t9ibGQj8ZE-nublwnYK5CwVEMofgrXOdGvTTCawzacnH_i86zmh8Dtda8amEJX/s320/instant-clones-provisioning-workflows.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In the running source VM workflow, the source is briefly stunned to checkpoint the VM and create the delta disks. Then the source is back to its running state. Each new instant clone will depend on the shared delta disk, potentially hitting the vSphere limit of 255. These delta disks are redo logs and are not tied to the snapshot chain, hence not visible in the UI. The supported snapshot chain limit in vSphere is still 32. In case the limit is hit, cloning will fail as described in <a href="https://kb.vmware.com/s/article/67186" target="_blank">KB article 67186</a>. 
To avoid this limitation, you could use the frozen source VM provisioning workflow, in which the source is frozen and no longer running, and the delta disks are created only for the child VMs. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Since the lab templates were actually running different services that did not cope very well with being frozen for longer periods of time, I used the running source VM workflow. To create the clones I borrowed and adapted the code from William Lam's <a href="https://github.com/lamw/PowerCLI-Example-Scripts/blob/master/Modules/InstantClone/InstantClone.psm1" target="_blank">instant clone PowerCLI module</a> (thank you!). He also has some very good articles on the technology. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">What I did not realize at the time is that it would impact the performance of the labs once the number of delta disks increased. The cloned labs were temporary by nature and removed after a specific run time. However, the delta disks on the source VMs were not cleaned up and just kept accumulating, which in the end impacted user experience. So I needed to introduce a cleaning mechanism. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The simplest way to clean up the source VMs was to use an idea that I got from <a href="https://helpcenter.veeam.com/docs/backup/vsphere/snapshot_hunter_hiw.html?ver=110" target="_blank">Veeam Snapshot Hunter</a>: create a snapshot on the lab template VMs (source VMs) and then immediately initiate a delete-all command. This will clean up all the delta disks from the source VMs. The PowerCLI script would run nightly as a scheduled job. </div><div><br /><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #996633;">$labPrefix</span> = <span style="background-color: #fff0f0;">"lab-1-*"</span>
<span style="color: #996633;">$vms</span> = <span style="color: #007020;">Get-VM</span> -Name <span style="color: #996633;">$labPrefix</span>
<span style="color: #008800; font-weight: bold;">foreach</span> (<span style="color: #996633;">$vm</span> <span style="color: #008800; font-weight: bold;">in</span> <span style="color: #996633;">$vms</span>) {
<span style="color: #996633;">$snapTime</span> = <span style="color: #007020;">get-date</span> -Format <span style="background-color: #fff0f0;">"MM/dd/yyyy HH:mm"</span>
<span style="color: #996633;">$description</span> = <span style="color: #996633;">$vm</span>.Name + <span style="background-color: #fff0f0;">" "</span> + <span style="color: #996633;">$snapTime</span>
<span style="color: #007020;">New-Snapshot</span> -VM <span style="color: #996633;">$vm</span> -Name <span style="background-color: #fff0f0;">"delta disk cleanup"</span> -Description <span style="color: #996633;">$description</span> -Memory<span style="background-color: #ffaaaa; color: red;">:</span><span style="color: #996633;">$true</span> -Confirm<span style="background-color: #ffaaaa; color: red;">:</span><span style="color: #996633;">$false</span>
<span style="color: #007020;">Get-Snapshot</span> -VM <span style="color: #996633;">$vm</span> -Name <span style="background-color: #fff0f0;">"delta disk cleanup"</span> | <span style="color: #007020;">Remove-Snapshot</span> -RemoveChildren -Confirm<span style="background-color: #ffaaaa; color: red;">:</span><span style="color: #996633;">$false</span>
}
</pre></div>
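If the nightly job proves too blunt, the same trick can be made conditional. Below is an untested sketch (PowerCLI assumed; the threshold value and the delta-disk file name patterns are my assumptions and should be verified in your environment) that consolidates a source VM only when it has accumulated a noticeable number of redo logs:

```powershell
# Hedged sketch: consolidate delta disks only when a source VM has accumulated many.
# Assumptions: a PowerCLI session is already connected; redo logs show up in
# LayoutEx.File with names like 'vm-000123-delta.vmdk' (or '-sesparse.vmdk').
$labPrefix = "lab-1-*"
$threshold = 50   # hypothetical value - tune to your environment

foreach ($vm in Get-VM -Name $labPrefix) {
    $deltaDisks = $vm.ExtensionData.LayoutEx.File |
        Where-Object { $_.Name -match '-\d{6}-(delta|sesparse)\.vmdk$' }
    if ($deltaDisks.Count -gt $threshold) {
        # Same create-then-delete-all trick as in the script above
        New-Snapshot -VM $vm -Name "delta disk cleanup" -Confirm:$false | Out-Null
        Get-Snapshot -VM $vm -Name "delta disk cleanup" |
            Remove-Snapshot -RemoveChildren -Confirm:$false
    }
}
```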
<br /></div><div>The plan is to test the <span style="background-color: white; font-family: "courier new", courier, monospace; font-size: 14px;">Vim.VirtualMachine.PromoteDisk(unlink=True)</span> method in the future.</div><div><br /></div><div>A few takeaway points:</div><div>- instant clone is a very fast cloning technology and it also optimizes resource usage (memory, disk)</div><div>- if the number of cloned VMs from the same source is very large ( > 200) use the frozen source VM workflow</div><div>- when using a running source VM, make sure to include a cleanup mechanism for the delta disks</div><div>- time synchronization in the source VMs is very important (as always)</div><div>- if you need full performance, use full clones </div><div><br /></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0Bucharest, Romania44.4267674 26.102538416.116533563821157 -9.0537116 72.737001236178855 61.2587884tag:blogger.com,1999:blog-555576322574773333.post-78874551494238301962021-10-19T19:51:00.004+03:002021-10-19T19:58:00.775+03:00Certifications during pandemics - Pearson VUE online proctoring<p>Recently I had the opportunity to take (and pass) 3 certifications and I did it using Pearson's OnVUE online proctoring. Talking to other colleagues of mine I found out there are mixed feelings about the experience. For me it was an overall good experience. So, I've decided to put together a few thoughts about how it went. </p><p><br /></p><p><b>The good </b></p><p>You can schedule the exam anytime you want and you can do it from one day to another. You are home in your office, so it's a familiar space. There is no commute to the test center and back. For me these are the biggest advantages. </p><p><b>The not so good</b></p><p>You have to clean up your desk and disconnect everything. If you have a docking station, multiple monitors and other equipment, it will be a bit of work to do. 
If you have other things around your desk (like my old film cameras that I keep as decorations), you will need to move those too. Be prepared to use your webcam to show that cables are unplugged. </p><p>Another thing to take care of: no one is allowed to enter the room, be it kid, partner or pet. This may prove inconvenient.</p><p>The app delivering the exam is not optimized for wide monitors. That makes the questions very long and places the buttons in strange positions. But you get used to it, or better, use the laptop screen. </p><p><b>The weird</b></p><p>The proctor experience can vary a bit. It was fine for 2 exams to use an external monitor, not fine during another one. The weirdest thing: I was told during one exam not to look up because that is not allowed and doing it again would fail my exam (!?!). Small issue here: when I try to remember things I involuntarily look up. Luckily I managed to pass the exam without remembering too many things. </p><p><b>Connectivity issues</b></p><p>One morning it took longer to connect and get someone to join me online. It took me more than half an hour to start the process. But after that all went well. No biggie here, just start on time.</p><p><br /></p><p>Once you get the exam started, the experience is the same as in any test center. I am not sure that I would like to go back to a test center unless absolutely necessary (like looking up during the exam). </p><p><br /></p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-2262040712862656082021-10-06T00:30:00.009+03:002021-10-06T00:30:00.258+03:00What's new in vRealize Automation 8.5.x and 8.6 <p>The latest releases of vRealize Automation bring in a series of interesting features. 
</p><p><b><br /></b></p><p><b>Cloud Resource </b></p><p>Cloud Resource view was introduced back in May 2021 for vRA Cloud and allows you to manage resources directly instead of managing them by resource groups (deployments). It now allows you to manage all discovered, onboarded and provisioned deployments, trigger power Day 2 actions on discovered resources and bulk-manage multiple resources at the same time.</p><p><b><br /></b></p><p><b>ABX enabled deployment for custom resources</b></p><p>Provisioning a custom resource allows you to track and manage the custom resource and its properties during its whole lifecycle. No dynamic types are needed for full lifecycle management. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsNwEfvBCGZUP1tZ6X8B5IKQozhRlT0hTZmGoYZo2oe3sLKlG4NUKPgZzOtdf5QB9d5ruf49dmwzJO46RmfiUb0NlCDEEAafqmAErkm-FAOOiue4psn_E2KI4_Sl_GoGWs95fNi8waomM/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="552" data-original-width="1159" height="152" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsNwEfvBCGZUP1tZ6X8B5IKQozhRlT0hTZmGoYZo2oe3sLKlG4NUKPgZzOtdf5QB9d5ruf49dmwzJO46RmfiUb0NlCDEEAafqmAErkm-FAOOiue4psn_E2KI4_Sl_GoGWs95fNi8waomM/" width="320" /></a></div><br /><br /><p></p><p><b>Cloud Templates Dynamic Input </b></p><p>Use vRO Actions for dynamic external values to define different types of input values directly at the Cloud Template and bind local inputs to the dynamic inputs as action parameters. 
</p><p><br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1gPfY_wX19fgH-QRgv6F9uvzALz9x3tRycBtmZvgxNbLw5zvGjCAGOlyqeyYSxkdkllgeQ_qPhyEu2fAc91KohkJa9HlE1kMLcsDVgljLWd0RK6e20z0iABlri9gCIy-72-F80Kv5SL8/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="648" data-original-width="627" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1gPfY_wX19fgH-QRgv6F9uvzALz9x3tRycBtmZvgxNbLw5zvGjCAGOlyqeyYSxkdkllgeQ_qPhyEu2fAc91KohkJa9HlE1kMLcsDVgljLWd0RK6e20z0iABlri9gCIy-72-F80Kv5SL8/" width="232" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZ-UWKfwj6cyBTW30r8NIUVfdi1TaOylWUgw_Sd0tqbHD8Uo3OqNNzkEv6VjpXtI2E6weNrM5VphnQBFYTrOdsDAAqEEdolConUw1Bwl32Xz-5zBr4fBKsmFNnFWl4k7ga6OXK_tV1YAo/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="493" data-original-width="646" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZ-UWKfwj6cyBTW30r8NIUVfdi1TaOylWUgw_Sd0tqbHD8Uo3OqNNzkEv6VjpXtI2E6weNrM5VphnQBFYTrOdsDAAqEEdolConUw1Bwl32Xz-5zBr4fBKsmFNnFWl4k7ga6OXK_tV1YAo/" width="314" /></a></div><br /><br /><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuW4sjjs2u1t-d0mbise1ePPDys8EkTO6U8gBKJ0DYM7MVxwv6qB_28FDOk_M8uF7Ox0QAEw6Ilc2BUAgeffPo4hLAXIXIZhOcApRpqHnCEOBEHtZTMpUC-XJPslJCyO2WGfb6hNBEiU4/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="602" data-original-width="615" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuW4sjjs2u1t-d0mbise1ePPDys8EkTO6U8gBKJ0DYM7MVxwv6qB_28FDOk_M8uF7Ox0QAEw6Ilc2BUAgeffPo4hLAXIXIZhOcApRpqHnCEOBEHtZTMpUC-XJPslJCyO2WGfb6hNBEiU4/" width="245" /></a></div><br /><br /><p></p><p><b>Kubernetes support in 
Code Stream Workspace</b></p><p>The Code Stream pipeline workspace now supports Docker and Kubernetes for continuous integration tasks. The Kubernetes platform manages the entire lifecycle of the container, similar to Docker. In the pipeline workspace, you can choose Docker (the default selection) or Kubernetes. In the workspace, you select the appropriate endpoint. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5cKfmyorGWktPTOSsyq3reUyjoii2-CtyvekRqpeTR1Pe6-BJD23cPI6h4L_tJT4fbVqPYbeZkrGrrESI_4kd8uH7dIPByor6fvdIUg70K01ouHGQzaV9y0vZ6jipv50n7e7liofXGyQ/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="674" data-original-width="1070" height="202" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5cKfmyorGWktPTOSsyq3reUyjoii2-CtyvekRqpeTR1Pe6-BJD23cPI6h4L_tJT4fbVqPYbeZkrGrrESI_4kd8uH7dIPByor6fvdIUg70K01ouHGQzaV9y0vZ6jipv50n7e7liofXGyQ/" width="320" /></a></div><br />The Kubernetes workspace provides:<p></p><p></p><ul style="text-align: left;"><li>the builder image to use</li><li>image registry</li><li>namespace</li><li>node port</li><li>persistent Volume Claim</li><li>working directory</li><li>environment variables</li><li>CPU limit</li><li>memory limit.</li></ul><p></p><p>You can also choose to create a clone of the Git repository.</p><div><p><b><br /></b></p><p><b>Multi-cloud</b></p><p>vRA leverages Azure provisioning capabilities, including the ability to enable/disable boot diagnostics for Azure VMs for Day 0/2, and the ability to configure the name for the Azure NIC interfaces.</p></div><div><b><br /></b></div><div><b>Other updates and new features </b></div><div><div><ul style="text-align: left;"><li>Native SaltStack Configuration Automation Config via modules for vSphere, VMC, and NSX</li><li>Leverage third-party integrations with Puppet Enterprise support for machines without a Public IP address</li><li>Deploy a VCD 
adapter for vRA</li><li>Onboard vSphere networks to support an additional resource type in the onboarding workflow</li></ul></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-61096713103975280412021-08-26T15:01:00.006+03:002021-08-26T15:01:55.704+03:00VMworld 2021 - Sessions to watch <p><br /></p><p>This year is going to be the second one in a row when I don't get to do my favorite autumn activity: go to VMworld in Barcelona. But I do get to do part of it - attend the virtual <a href="https://bit.ly/3DbHgp3" target="_blank">VMworld 2021</a>. And to make it as close as possible to the real experience, I will most probably add some red Spanish wine and jamon on the side. </p><p>As for the sessions I am looking forward to attending, I will leave here a few of my choices:</p><p><a href="https://myevents.vmware.com/widget/vmware/vmworld2021/catalog?search=MCL1084" target="_blank">VMware vSAN – Dynamic Volumes for Traditional and Modern Applications [MCL1084]</a></p><p>I've been involved recently in projects with Tanzu and vSAN and this session with Duncan Epping and Cormac Hogan is the place to go to see how vSAN continues to evolve, to learn about new features, integration with Tanzu and hear some of the best practices. </p><p><a href="https://myevents.vmware.com/widget/vmware/vmworld2021/catalog?search=APP1564" target="_blank">The Future of VM Provisioning – Enabling VM Lifecycle Through Kubernetes [APP1564]</a></p><p>A session about what I think is one of the game changers introduced by VMware this year: <span style="color: #565656; font-family: metropolislight;">include VM-based workloads in modern applications using Kubernetes APIs to deploy, configure and manage them. 
I've been working with VM Service since its official release in May and also wrote a small <a href="https://www.sysadminstories.com/2021/08/vsphere-with-tanzu-vm-operator.html">blog post</a> earlier this month. </span></p><p style="text-align: left;"><a href="https://myevents.vmware.com/widget/vmware/vmworld2021/catalog?search=APP1205" target="_blank">What's New in vSphere [APP1205]</a></p><div>This is one of the sessions I never miss. vSphere is still one of the fundamental technologies for all other transformations. I am interested in finding out what the latest capabilities are, the customer challenges and real-world customer successes. </div><p style="text-align: left;"><a href="https://myevents.vmware.com/widget/vmware/vmworld2021/catalog?search=CODE2786" target="_blank">Automation Showdown: Imperative vs Declarative [CODE2786]</a></p><div>There is no way to miss Luc Dekens and Kyle Rudy's take on the hot topic of imperative versus declarative infrastructure, understanding when and how you can and should use each of them, and seeing practical examples.</div><p style="text-align: left;"><a href="https://myevents.vmware.com/widget/vmware/vmworld2021/catalog?search=IC1484" target="_blank">Achieving Happiness: The Quest for Something New [IC1484]</a></p><div>I had the honor to meet Amanda Blevins at VMUG Leaders Summit right before the world decided to close. Her presentation wowed the crowd and it was one of the highest rated. So this is something that shouldn't be missed, especially since the pandemic has been around for 18 months and we need to achieve some happiness. </div><div><br /></div><div>There are hundreds of sessions and the areas covered are so diverse that you can find your pick regardless of your interests in AI, application modernization, Kubernetes, security, network, personal development or plain old virtualization. See you at <a href="https://bit.ly/3DbHgp3" target="_blank">VMworld 2021</a>! 
</div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-16522125045880799052021-08-20T10:32:00.001+03:002021-08-20T10:32:00.181+03:00vSphere with Tanzu - Create custom VM template for VM Operator<p>We've seen in the previous post how to enable and use VM Operator. We've also noticed that currently there are only 2 VM images that are supported to be deployed using VM Operator. What if we need to create our own image? </p><p>There is a way, but the way is not supported by VMware. So once you go down this path, you have to understand the risks. </p><p>What is so special about the VM image deployed using VM Operator? It is using <a href="https://cloudinit.readthedocs.io/en/latest/" target="_blank">cloud-init</a> and OVF environment variables to initialize the VM. </p><p>Let's start with a new Linux VM template. We will install VMware Tools. Then we need to install cloud-init. Once cloud-init is installed, update the configuration as follows:</p><p></p><ul style="text-align: left;"><li>in /etc/cloud/cloud.cfg check the following value: <span style="font-family: courier;">disable_vmware_customization: true </span></li><ul><li><span style="font-family: inherit;">setting it to <b>true </b>invokes the traditional Guest Operating System Customization script-based workflow (GOSC); if it is set to <b>false</b>, cloud-init customization will be used. </span></li></ul></ul><ul style="text-align: left;"><li><span style="font-family: inherit;">create a new file </span>/etc/cloud/cloud.cfg.d/99_vmservice.cfg and add the following line to it <span style="font-family: courier;">network: {config: disabled};</span></li><ul><li>this will prevent cloud-init from configuring the network; you guessed it, VMware Tools will be used to configure the network</li></ul></ul><div>Before exporting the VM as an OVF template, run <span style="font-family: courier;">cloud-init clean</span> to simulate a clean instance installation. 
It should be run on subsequent template updates too. </div><div><br /></div><div><span style="font-family: inherit;">Next we'll customize the OVF file itself. We need to enable OVF environment variables to be used to transport data to cloud-init. For this to work, I just copied the configuration from the OVF file of VMware's CentOS VM Service image and updated several sections: </span></div><div><span style="font-family: inherit;"><br /></span></div><div>In <span style="font-family: courier;"><VirtualSystem ovf:id="vm"></span>, add the following OVF properties. Please note that you could/should change the labels and descriptions to match your template.</div><div><br /></div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #007700;"><ProductSection</span> <span style="color: #0000cc;">ovf:required=</span><span style="background-color: #fff0f0;">"false"</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Info></span>Cloud-Init customization<span style="color: #007700;"></Info></span>
<span style="color: #007700;"><Product></span>Linux distribution for VMware VM Service<span style="color: #007700;"></Product></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"instance-id"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span> <span style="color: #0000cc;">ovf:value=</span><span style="background-color: #fff0f0;">"id-ovf"</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Label></span>A Unique Instance ID for this instance<span style="color: #007700;"></Label></span>
<span style="color: #007700;"><Description></span>Specifies the instance id. This is required and used to determine if the machine should take "first boot" actions<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"hostname"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span> <span style="color: #0000cc;">ovf:value=</span><span style="background-color: #fff0f0;">"centosguest"</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Description></span>Specifies the hostname for the appliance<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"seedfrom"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Label></span>Url to seed instance data from<span style="color: #007700;"></Label></span>
<span style="color: #007700;"><Description></span>This field is optional, but indicates that the instance should 'seed' user-data and meta-data from the given url. If set to 'http://tinyurl.com/sm-' is given, meta-data will be pulled from http://tinyurl.com/sm-meta-data and user-data from http://tinyurl.com/sm-user-data. Leave this empty if you do not want to seed from a url.<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"public-keys"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span> <span style="color: #0000cc;">ovf:value=</span><span style="background-color: #fff0f0;">""</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Label></span>ssh public keys<span style="color: #007700;"></Label></span>
<span style="color: #007700;"><Description></span>This field is optional, but indicates that the instance should populate the default user's 'authorized_keys' with this value<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"user-data"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span> <span style="color: #0000cc;">ovf:value=</span><span style="background-color: #fff0f0;">""</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Label></span>Encoded user-data<span style="color: #007700;"></Label></span>
<span style="color: #007700;"><Description></span>In order to fit into a xml attribute, this value is base64 encoded . It will be decoded, and then processed normally as user-data.<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"><Property</span> <span style="color: #0000cc;">ovf:key=</span><span style="background-color: #fff0f0;">"password"</span> <span style="color: #0000cc;">ovf:type=</span><span style="background-color: #fff0f0;">"string"</span> <span style="color: #0000cc;">ovf:userConfigurable=</span><span style="background-color: #fff0f0;">"true"</span> <span style="color: #0000cc;">ovf:value=</span><span style="background-color: #fff0f0;">""</span><span style="color: #007700;">></span>
<span style="color: #007700;"><Label></span>Default User's password<span style="color: #007700;"></Label></span>
<span style="color: #007700;"><Description></span>If set, the default user's password will be set to this value to allow password based login. The password will be good for only a single login. If set to the string 'RANDOM' then a random password will be generated, and written to the console.<span style="color: #007700;"></Description></span>
<span style="color: #007700;"></Property></span>
<span style="color: #007700;"></ProductSection></span>
</pre></div>
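The user-data property above expects a base64-encoded value. A quick way to produce one from a cloud-config file (the cloud-config content here is only an illustration, not part of the original template):

```shell
# Write a sample cloud-config and base64-encode it for the OVF "user-data" property.
# Assumes GNU coreutils base64 (-w0 disables line wrapping).
cat > user-data.yaml <<'EOF'
#cloud-config
packages:
  - nginx
EOF
base64 -w0 user-data.yaml
```

The resulting single-line string is what goes into the user-data OVF property, which cloud-init will decode and process as regular user-data.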
<div><span style="background-color: white; color: #172b4d; font-size: 14px;"><span style="font-family: courier;"><br /></span></span></div><div><span style="background-color: white; color: #172b4d; font-size: 14px;"><span style="font-family: courier;"><br /></span></span></div><div>In <span face="-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif" style="background-color: white; color: #172b4d; font-size: 14px;"><VirtualHardwareSection ovf:transport="iso"></span>, add the following:</div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #007700;"><vmw:ExtraConfig</span> <span style="color: #0000cc;">ovf:required=</span><span style="background-color: #fff0f0;">"false"</span> <span style="color: #0000cc;">vmw:key=</span><span style="background-color: #fff0f0;">"guestinfo.vmservice.defer-cloud-init"</span> <span style="color: #0000cc;">vmw:value=</span><span style="background-color: #fff0f0;">"ready"</span><span style="color: #007700;">/></span>
</pre></div>
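One gotcha when hand-editing an exported OVF: if the export also produced a manifest (.mf) file, its digest for the .ovf will no longer match and the import may be rejected. A self-contained sketch of refreshing the entry (assuming the manifest uses SHA256 digests; alternatively, simply delete the .mf file):

```shell
# Demo on throwaway files; in practice template.ovf is your hand-edited descriptor.
ovf="template.ovf"
mf="template.mf"
printf '<Envelope/>' > "$ovf"                    # stand-in for the edited OVF
printf 'SHA256(%s)= deadbeef\n' "$ovf" > "$mf"   # stale digest from the original export
sum=$(sha256sum "$ovf" | awk '{print $1}')
sed -i "s|^SHA256($ovf)=.*|SHA256($ovf)= $sum|" "$mf"
cat "$mf"
```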
<br /></div><div>Save the OVF file and export it to the content library. The name must be DNS compliant and must not contain any capital letters. </div><div><br /></div><div>Lastly, in the YAML manifest of the VM, add an annotation to disable the image checks done by VM Operator:</div><div><br /></div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">metadata:
  name: my-vm-name
  labels:
    app: db-server
  annotations:
    vmoperator.vmware.com/image-supported-check: disable
</pre></div>
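The naming rule mentioned above (DNS compliant, no capital letters) matches the usual RFC 1123 label convention, so a small check like this can catch a bad name before the export fails. The helper name is mine, for illustration:

```python
import re

# RFC 1123 label: lowercase alphanumerics and '-', must start and end
# with an alphanumeric character, at most 63 characters long.
DNS_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_dns_compliant(name: str) -> bool:
    return bool(DNS_LABEL.match(name))

print(is_dns_compliant("centos-stream-8-vmservice"))  # True
print(is_dns_compliant("CentOS_Template"))            # False: capitals and '_'
```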
<div><br /></div><div><p></p></div><div><br /></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-2267210865079078102021-08-10T17:16:00.006+03:002021-08-10T17:20:47.880+03:00vSphere with Tanzu - VM Operator <p>VM Operator is an extension to Kubernetes that implements VM management through Kubernetes. It was released officially at end of April 2021 with vCenter Server 7.0 U2a. This is a small feature pushed through a vCenter Server patch that is bringing a huge shift in the paradigm of VM management. It changes the way we are looking at VMs and at the way we are using virtualization. One could argue that Kubernetes already did that. I would say that unifying resource consumption through VMs and pods is actually a huge step forward. VM Operator brings to play not only Infrastructure as Code (IaC), but it also enables GitOps for VMs.</p><p>Let's look briefly at the two concepts. IaC represents the capability to define your infrastructure in a human readable language. A lot of tools exist that enable IaC - Puppet, Chef, Ansible, Terraform and so on. They are complex and powerful tools, some of them used in conjunction with others. All these tools have a particularity: they have their own language - Ruby, Python, HCL. GitOps expands the IaC concept. In this case, Git repository is the only source of truth. Manifests (configuration files that describe the resource to be provisioned) are pushed to a <span>G</span>it repository monitored by a continuous deployment (CD) tool that ensures that changes in the repository are applied in the real world. Kubernetes enables GitOps. Kubernetes manifests are written in YAML. With introduction of VM Operator the two concepts can be used in conjunction. 
For example, you could have a GitOps pipeline that deploys the VMs using Kubernetes manifests, and then configuration management tools make sure the VMs are customized to suit their purpose - deploying an application server, monitoring agents and so on. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAZe5Olt3t-fxF7K4OVP-lKPHy8f-0rMaMzdqx_YILiU5ukXmmPFNf_muBvev6Xt1Lf6P5Rv8mPDPSSwvSMMU0DynadnE2eJjmzsyH0ID_cGh4EEd6USriS4RAH2iR-DQGNyhc3OEpLkE/s846/vm-operator-diagram.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="504" data-original-width="846" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAZe5Olt3t-fxF7K4OVP-lKPHy8f-0rMaMzdqx_YILiU5ukXmmPFNf_muBvev6Xt1Lf6P5Rv8mPDPSSwvSMMU0DynadnE2eJjmzsyH0ID_cGh4EEd6USriS4RAH2iR-DQGNyhc3OEpLkE/s320/vm-operator-diagram.png" width="320" /></a></div><br /><p>In the current post we will only look at the basics of deploying a VM through VM Operator. Once these concepts are clear you can add other tools such as Git repositories, CD tools and configuration management. </p><p>So, what do we need to be able to provision a VM through VM Operator? </p><p>We need vCenter Server updated to U2a and a running Supervisor cluster. </p><p>At the namespace level a storage policy needs to be configured. 
It is needed for both VM deployment and persistent volumes </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxCmFOdQvtrlwMPV2O3ZWaRdZo55t8m0yWZ8N6gMFlfUQYOsl6XlI3Z_-PShgJZ-7vhV6l6fbszrIq8QqBtLL0mQngdsUAH5RSTx-C2_v8eBvp06OcGX5r7-iuu9H9lVyxnhV6GA83_do/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="369" data-original-width="267" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxCmFOdQvtrlwMPV2O3ZWaRdZo55t8m0yWZ8N6gMFlfUQYOsl6XlI3Z_-PShgJZ-7vhV6l6fbszrIq8QqBtLL0mQngdsUAH5RSTx-C2_v8eBvp06OcGX5r7-iuu9H9lVyxnhV6GA83_do/" width="174" /></a></div><br /><br /><p></p><p>We need a content library uploaded with a supported VMware template (we will follow soon with a post on how to create unsupported VMware templates for VM operator). At the time of writing CentOS 8 and Ubuntu images are distributed through VMware Marketplace (https://marketplace.cloud.vmware.com/ search for "VM service Image")</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3y5j0EwN7znhhHc0C0GTsPGgSQozr3dg81TMt0TqzaLLyJHxaOPEO7lqQilZywOqpE-Iacgvrnr4Qcb1M_EFsbBX9IvHuE5IrpUcsPRIfwR6_HqCrNt7PhzmzICFmphNCUPOqBSV5gbU/s573/vm-service-image-centos-8.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="231" data-original-width="573" height="129" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3y5j0EwN7znhhHc0C0GTsPGgSQozr3dg81TMt0TqzaLLyJHxaOPEO7lqQilZywOqpE-Iacgvrnr4Qcb1M_EFsbBX9IvHuE5IrpUcsPRIfwR6_HqCrNt7PhzmzICFmphNCUPOqBSV5gbU/s320/vm-service-image-centos-8.png" width="320" /></a></div><p><br /></p><p>The images are installed with cloud-init and configured to transport user data using OVF environment variables to cloud-init process which in turn customizes the VM operating system. 
</p><p>In Workload Management, VM Service allows the configuration of additional VM classes and content libraries. VM classes and content library are assigned to the namespace to be able to provision the VMs. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinizJnCpMItEtXGdijKPUL9FLytA0bxx0Fd58sPzDbGJ4B1H4r-GW-nFknJ4SncyHhWoOPLCnY4zhabrjvga0u3vueZCv-vGoV0D5uYIob9YQhwfXGzgrdU_sc9yfqnLn9SwvnG06NLoA/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="361" data-original-width="252" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinizJnCpMItEtXGdijKPUL9FLytA0bxx0Fd58sPzDbGJ4B1H4r-GW-nFknJ4SncyHhWoOPLCnY4zhabrjvga0u3vueZCv-vGoV0D5uYIob9YQhwfXGzgrdU_sc9yfqnLn9SwvnG06NLoA/" width="168" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">VM classes selected for a particular namespace:</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVrJumHu-1ZeAToFCkrzNd8-q6EAfkyu-Wn6OX9wnYkHiry_oBTqpgxvNsalyCxNlH_SaxJkNeGd27xGf49pI0LqJ3-9wVo3Gy7W0OpK4NH_IFEHlp7kheRF-ipSXEP2ZaDnMIJpiM7Cw/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="566" data-original-width="1115" height="162" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVrJumHu-1ZeAToFCkrzNd8-q6EAfkyu-Wn6OX9wnYkHiry_oBTqpgxvNsalyCxNlH_SaxJkNeGd27xGf49pI0LqJ3-9wVo3Gy7W0OpK4NH_IFEHlp7kheRF-ipSXEP2ZaDnMIJpiM7Cw/" width="320" /></a></div><p><br /></p>Content library selected for a particular namespace:<br /><p></p><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3u18nHHiEv72o7wBwirdqyzA83s_sLRKB5uG-uKSccS9ANtvmB7Zuy0EuyzedJzCBHTeF22v9eDG_D6iLTKipBcQ8ms_Vayg_gSmo3blcw3x0vERcA8EKB1YQYuYE9h3GJpTVtc7exyc/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="291" data-original-width="827" height="113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3u18nHHiEv72o7wBwirdqyzA83s_sLRKB5uG-uKSccS9ANtvmB7Zuy0EuyzedJzCBHTeF22v9eDG_D6iLTKipBcQ8ms_Vayg_gSmo3blcw3x0vERcA8EKB1YQYuYE9h3GJpTVtc7exyc/" width="320" /></a></div><br />Once or the prerequisites are in place, connect to supervisor cluster, and select the namespace you want to deploy the VM<div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">kubectl vsphere login --verbose 5 --server<span style="color: #333333;">=</span>https://192.168.2.1 --insecure-skip-tls-verify -u cloudadmin@my.lab
kubectl config use-context my-app-namespace
</pre></div>
</div><div><br /></div><div><br /></div><div>Check that the VM images in the content library are available </div><div><br /></div><div><div><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">kubectl get virtualmachineimages
</pre></div></div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyzac3X_m9bzpJe9JTTE7LknxP204-U1gS43FG99R19a26QOlQBqxEQM4HFf6Q2M9oOpxEwKSOiaLn70oWAO4B8giH5Dks8aOi-VRCtle9KK0wLRZR4rZhF18oUrEFj2oakm-4o6_z3Us/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="59" data-original-width="596" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyzac3X_m9bzpJe9JTTE7LknxP204-U1gS43FG99R19a26QOlQBqxEQM4HFf6Q2M9oOpxEwKSOiaLn70oWAO4B8giH5Dks8aOi-VRCtle9KK0wLRZR4rZhF18oUrEFj2oakm-4o6_z3Us/" width="320" /></a></div><br /></div>Create the VM manifest file - centos-db-2.yaml </div></div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: centos-db-2
  labels:
    app: my-app-db
spec:
  imageName: centos-stream-8-vmservice-v1alpha1.20210222.8
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: tanzu-gold
  networkInterfaces:
  - networkType: nsx-t
  vmMetadata:
    configMapName: my-app-db-config
    transport: OvfEnv
<span style="color: #0e84b5; font-weight: bold;">---</span>
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-db-config
data:
  user-data: |
    <span style="color: #003366; font-weight: bold;">I2Nsb3VkL6CiAgICBlbnMxNjA6CiAgICAgIGRoY3A0OiB0cnVlCg==</span>
  hostname: centos-db-2
</pre></div>
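The user-data value in the ConfigMap above is just Base64-encoded text, so producing it for your own cloud-init payload is a one-liner. The cloud-config payload below is a sample I made up for illustration:

```python
import base64

# Sample cloud-config payload; replace with your own user data.
user_data = """#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@host
"""

encoded = base64.b64encode(user_data.encode()).decode()
print(encoded)  # paste this into the ConfigMap's user-data field

# Decoding it back confirms the value round-trips cleanly.
assert base64.b64decode(encoded).decode() == user_data
```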
<br /></div><div>In the manifest file we've added two resources:</div><div>- VirtualMachine: where we specify the VM template to use, the VM class, storage policy, network type and also how to send variables to the cloud-init inside the VM (using a ConfigMap resource to keep the data in Kubernetes and OVF environment variables to transport it to the VM)</div><div>- ConfigMap: contains in our case the user data (Base64 encoded - an SSH key) and the hostname of the VM; the Base64 output in this post is truncated </div><div><br /></div><div> To create the VM, apply the manifest. Then check its state.</div><div><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">kubectl apply -f centos-db-2.yaml
</pre><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;"><br /></pre><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">kubectl get virtualmachine</pre></div></div><div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTlsLrmCdR0QFC7zqo-iqeGLC5QUTVOjpH0lgRLttZNSUuUGQxKumFaQFNBpDrFLuJgFQL92KqQWQLZtPfyw66-nAEcaF2gxXUtW6ueTgZY_YwnoO9ynJf-G9Y6ymv7RcFTatiTgVz8Nc/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="99" data-original-width="558" height="57" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTlsLrmCdR0QFC7zqo-iqeGLC5QUTVOjpH0lgRLttZNSUuUGQxKumFaQFNBpDrFLuJgFQL92KqQWQLZtPfyw66-nAEcaF2gxXUtW6ueTgZY_YwnoO9ynJf-G9Y6ymv7RcFTatiTgVz8Nc/" width="320" /></a></div></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br />Once the VM has been provisioned, it has been assigned an IP from the POD CIDR </div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7bNAo71geT0IOAeOglF_rT2CObelBMfDhbNGTHj3L17T0o3CdTML80zWaCflKljBP4FkZhxU5JN9q-HYGPXid9Jz1mzfLHEI5e-BlzR2WnhIQJ7csWqZe2tx0SiStpcl6s6TN0tfwIAY/s632/vm-pod-cidr-ip.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="594" data-original-width="632" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7bNAo71geT0IOAeOglF_rT2CObelBMfDhbNGTHj3L17T0o3CdTML80zWaCflKljBP4FkZhxU5JN9q-HYGPXid9Jz1mzfLHEI5e-BlzR2WnhIQJ7csWqZe2tx0SiStpcl6s6TN0tfwIAY/s320/vm-pod-cidr-ip.png" width="320" /></a></div><br /><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><br /></div>POD CIDRs are private subnets used for inter-pod 
communication. To access the VM, it needs an Ingress CIDR IP. This is a routable IP and it is implemented in NSX-T as a VIP on the load balancer. The Egress CIDR is used for communication from the VM to the outside world and it is implemented as an SNAT rule. To define an ingress IP, we need to create a virtual machine service resource of type load balancer:</div><div><br /></div><div><div>Create the manifest file - service-ssh-centos-db-2.yaml </div><div><br /></div><div><!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: lb-centos-db-2
spec:
  selector:
    app: my-app-db
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
</pre></div></div><div><br /></div></div><div>We are using the selector app: my-app-db to match the VM resource for this service. The service will be assigned an IP from Ingress network and it will forward all requests coming to that IP on SSH port to the VM IP on SSH port. </div><div><br /></div><div><div> To create the service, apply the manifest. Then check its state.</div><div><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">kubectl apply -f service-ssh-centos-db-2.yaml
</pre><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;"><br /></pre><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">kubectl get service lb-centos-db-2</pre></div></div></div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLBS8N0FlZdeGZMmhZxtbZ0xO9Q0fRiY4G2IWWGvZE6Hz9_K7vXabNYMdKf2Ye5dD6_VWtEZZOrJeM-mPoSlc9th_AvYi8_N_vnr6irQeMdzOpEAWTFo7Ce3vJKHWDteVrhV5ami06keM/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="60" data-original-width="818" height="23" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLBS8N0FlZdeGZMmhZxtbZ0xO9Q0fRiY4G2IWWGvZE6Hz9_K7vXabNYMdKf2Ye5dD6_VWtEZZOrJeM-mPoSlc9th_AvYi8_N_vnr6irQeMdzOpEAWTFo7Ce3vJKHWDteVrhV5ami06keM/" width="320" /></a></div><div><br /></div>The External IP displayed in the above listing is the ingress IP that you can use now to ssh to the VM:</div><div><div><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 16.25px; margin-bottom: 0px; margin-top: 0px;">ssh cloud-user@external_ip</pre></div></div><div><br /></div>Please note the user used to SSH. From it, you can then sudo and gain root privileges. </div><div><p>A VM provisioned via VM Operator can only be managed through the Supervisor cluster API (Kubernetes API). In this regard, the VM cannot be any longer managed directly from the UI or other management tools. 
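Managing a VM only through the Kubernetes API means the operator pattern applies: a controller continuously reconciles the declared state in the manifest with the actual state in vSphere. A toy sketch of that idea (all names here are invented for illustration, this is not the VM Operator code):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions a controller would take to converge actual -> desired."""
    actions = []
    for key, want in desired.items():
        if actual.get(key) != want:
            actions.append(f"set {key}={want}")
    return actions

desired = {"powerState": "poweredOn", "className": "best-effort-xsmall"}
actual = {"powerState": "poweredOff", "className": "best-effort-xsmall"}
print(reconcile(desired, actual))  # ['set powerState=poweredOn']
```

Changing the VM thus means editing the manifest and re-applying it, and letting the controller converge, rather than clicking through the vSphere UI.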
Looking at the picture below you will notice that the VM is marked in the UI as "Developer Managed" and that there are no actions that can be taken on the VM</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6HZ-ZEClbfqzUYKh4647XMvOFUGfxV0UuaYSvkcGQjrm4SUpCQbzHL5fLJp_X4P0y3h0NaJXNtW0MMl9Va5f2u2PVNllgJq4eheKxAraCMBMfEvqcrojImPQJp3tarsdw_N1c7pVobCk/s969/vm-provisioned-vm-operator.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="242" data-original-width="969" height="80" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6HZ-ZEClbfqzUYKh4647XMvOFUGfxV0UuaYSvkcGQjrm4SUpCQbzHL5fLJp_X4P0y3h0NaJXNtW0MMl9Va5f2u2PVNllgJq4eheKxAraCMBMfEvqcrojImPQJp3tarsdw_N1c7pVobCk/s320/vm-provisioned-vm-operator.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div>If you are this far, then well done, you've just provisioned your first VM using the Kubernetes API. Now put those manifests in a git repo, install and configure a CD tool (such as ArgoCD) to monitor the repo and apply the manifests on the Supervisor cluster, and you don't even need to touch the kubectl command line or vCenter Server :-) </div><div><br /><p></p><div class="separator" style="clear: both; text-align: center;"><br /><br /></div><br /><br /><p></p></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-77446612877490124982021-05-13T20:13:00.003+03:002021-05-13T20:13:32.720+03:00vSphere with Tanzu - Enable Supervisor Cluster using PowerCLI<p>In the <a href="https://www.sysadminstories.com/2021/05/vsphere-with-tanzu-enable-supervisor.html">previous post</a> we looked at how to manually enable the Supervisor cluster on a vSphere cluster. Now we'll reproduce the same steps from the GUI in a small script using PowerCLI. 
</p><p>PowerCLI 12.1.0 brought new cmdlets for the VMware.VimAutomation.WorkloadManagement module and one of these is Enable-WMCluster. We will be using this cmdlet to enable the Tanzu supervisor cluster. In the following example we'll be using NSX-T, but the <a href="https://developer.vmware.com/docs/powercli/latest/vmware.vimautomation.workloadmanagement/commands/enable-wmcluster/#NcpNetworking">cmdlet </a>can also be used with distributed switches. </p><p>The following script is very simple. First, we need to connect to vCenter Server and the NSX manager</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, &quot;Courier New&quot;, monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #dcdcaa;">Connect-VIServer</span> <span style="color: #9cdcfe;">-Server</span> <span style="color: #dcdcaa;">vc11.my.lab</span></div><div><span style="color: #dcdcaa;">Connect-NsxtServer</span> <span style="color: #9cdcfe;">-Server</span> <span style="color: #dcdcaa;">nsxt11.my.lab</span></div></div><p>Next we define the variables (all the variables that were in the UI wizard).</p><p>The cluster where we enable Tanzu, the content library and the storage policies:</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, &quot;Courier New&quot;, monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div style="line-height: 19px;"><div><span style="color: #9cdcfe;">$vsphereCluster</span> = <span style="color: #dcdcaa;">Get-Cluster</span> <span style="color: #ce9178;">"MYCLUSTER"</span></div><div><span style="color: #9cdcfe;">$contentLibrary</span> = <span style="color: #ce9178;">"Tanzu subscribed"</span></div><div><span style="color: #9cdcfe;">$ephemeralStoragePolicy</span> = <span style="color: #ce9178;">"Tanzu gold"</span></div><div><span style="color: #9cdcfe;">$imageStoragePolicy</span> = <span style="color: #ce9178;">"Tanzu silver"</span></div><div><span style="color: #9cdcfe;">$masterStoragePolicy</span> = <span 
style="color: #ce9178;">"Tanzu gold"</span></div></div></div><p>Management network info for Supervisor Cluster VMs</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #9cdcfe;">$mgmtNetwork</span> = <span style="color: #dcdcaa;">Get-VirtualNetwork</span> <span style="color: #ce9178;">"Mgmt-Network"</span></div><div><span style="color: #9cdcfe;">$mgmtNetworkMode</span> = <span style="color: #ce9178;">"StaticRange"</span></div><div><span style="color: #9cdcfe;">$mgtmNetworkStartIPAddress</span> = <span style="color: #ce9178;">"192.168.100.160"</span></div><div><span style="color: #9cdcfe;">$mgtmNetworkRangeSize</span> = <span style="color: #ce9178;">"5"</span></div><div><span style="color: #9cdcfe;">$mgtmNetworkGateway</span> = <span style="color: #ce9178;">"192.168.100.1"</span></div><div><span style="color: #9cdcfe;">$mgtmNetworkSubnet</span> = <span style="color: #ce9178;">"255.255.255.0"</span></div><div><div style="line-height: 19px;"><span style="color: #9cdcfe;">$distributedSwitch</span> = <span style="color: #dcdcaa;">Get-VDSwitch</span> <span style="color: #9cdcfe;">-Name</span> <span style="color: #ce9178;">"Distributed-Switch"</span></div></div></div><p>DNS and NTP servers</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #9cdcfe;">$masterDnsSearchDomain</span> = <span style="color: #ce9178;">"my.lab"</span></div><div><span style="color: #9cdcfe;">$masterDnsServer</span> = <span style="color: #ce9178;">"192.168.100.2"</span></div><div><span style="color: #9cdcfe;">$masterNtpServer</span> = <span style="color: #ce9178;">"192.168.100.5"</span></div><div><span style="color: #9cdcfe;">$workerDnsServer</span> = <span style="color: #ce9178;">"192.168.100.2"</span></div></div><p>Tanzu details - 
size and external and internal IP subnets</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, &quot;Courier New&quot;, monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #9cdcfe;">$size</span> = <span style="color: #ce9178;">"Tiny"</span> </div><div><span style="color: #9cdcfe;">$egressCIDR</span> = <span style="color: #ce9178;">"10.10.100.0/24"</span></div><div><span style="color: #9cdcfe;">$ingressCIDR</span> = <span style="color: #ce9178;">"10.10.200.0/24"</span></div><div><span style="color: #9cdcfe;">$serviceCIDR</span> = <span style="color: #ce9178;">"10.244.0.0/23"</span></div><div><span style="color: #9cdcfe;">$podCIDR</span> = <span style="color: #ce9178;">"10.96.0.0/23"</span></div></div><p>One more parameter needs to be provided: the Edge cluster ID. For this we use the NSX-T manager connection and look up the cluster by its display name: </p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, &quot;Courier New&quot;, monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #9cdcfe;">$edgeClusterSvc</span> = <span style="color: #dcdcaa;">Get-NsxtService</span> <span style="color: #9cdcfe;">-Name</span> <span style="color: #dcdcaa;">com.vmware.nsx.edge_clusters</span></div><div><span style="color: #9cdcfe;">$results</span> = <span style="color: #9cdcfe;">$edgeClusterSvc</span><span style="color: #dcdcaa;">.</span><span style="color: #9cdcfe;">list</span>().<span style="color: #9cdcfe;">results</span></div><div><span style="color: #9cdcfe;">$edgeClusterId</span> = (<span style="color: #9cdcfe;">$results</span> | <span style="color: #dcdcaa;">Where</span> {<span style="color: #9cdcfe;">$_</span><span style="color: #dcdcaa;">.</span><span style="color: #9cdcfe;">display_name</span> -eq <span style="color: #ce9178;">"tanzu-edge-cluster"</span>}).<span style="color: #9cdcfe;">id</span></div></div><p>The last thing is to put all the parameters together in the cmdlet and run it against the vSphere cluster 
object</p><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: #9cdcfe;">$vsphereCluster</span> | <span style="color: #dcdcaa;">Enable-WMCluster</span> `</div><div><span style="color: #9cdcfe;">-SizeHint</span> <span style="color: #9cdcfe;">$size</span> `</div><div><span style="color: #9cdcfe;">-ManagementVirtualNetwork</span> <span style="color: #9cdcfe;">$mgmtNetwork</span> `</div><div><span style="color: #9cdcfe;">-ManagementNetworkMode</span> <span style="color: #9cdcfe;">$mgmtNetworkMode</span> `</div><div><span style="color: #9cdcfe;">-ManagementNetworkStartIPAddress</span> <span style="color: #9cdcfe;">$mgtmNetworkStartIPAddress</span> `</div><div><span style="color: #9cdcfe;">-ManagementNetworkAddressRangeSize</span> <span style="color: #9cdcfe;">$mgtmNetworkRangeSize</span> `</div><div><span style="color: #9cdcfe;">-ManagementNetworkGateway</span> <span style="color: #9cdcfe;">$mgtmNetworkGateway</span> `</div><div><span style="color: #9cdcfe;">-ManagementNetworkSubnetMask</span> <span style="color: #9cdcfe;">$mgtmNetworkSubnet</span> `</div><div><span style="color: #9cdcfe;">-MasterDnsServerIPAddress</span> <span style="color: #9cdcfe;">$masterDnsServer</span> `</div><div><span style="color: #9cdcfe;">-MasterNtpServer</span> <span style="color: #9cdcfe;">$masterNtpServer</span> `</div><div><span style="color: #9cdcfe;">-MasterDnsSearchDomain</span> <span style="color: #9cdcfe;">$masterDnsSearchDomain</span> `</div><div><span style="color: #9cdcfe;">-DistributedSwitch</span> <span style="color: #9cdcfe;">$distributedSwitch</span> `</div><div><span style="color: #9cdcfe;">-NsxEdgeClusterId</span> <span style="color: #9cdcfe;">$edgeClusterId</span> `</div><div><span style="color: #9cdcfe;">-ExternalEgressCIDRs</span> <span style="color: #9cdcfe;">$egressCIDR</span> `</div><div><span style="color: 
#9cdcfe;">-ExternalIngressCIDRs</span> <span style="color: #9cdcfe;">$ingressCIDR</span> `</div><div><span style="color: #9cdcfe;">-ServiceCIDR</span> <span style="color: #9cdcfe;">$serviceCIDR</span> `</div><div><span style="color: #9cdcfe;">-PodCIDRs</span> <span style="color: #9cdcfe;">$podCIDR</span> `</div><div><span style="color: #9cdcfe;">-WorkerDnsServer</span> <span style="color: #9cdcfe;">$workerDnsServer</span> `</div><div><span style="color: #9cdcfe;">-EphemeralStoragePolicy</span> <span style="color: #9cdcfe;">$ephemeralStoragePolicy</span> `</div><div><span style="color: #9cdcfe;">-ImageStoragePolicy</span> <span style="color: #9cdcfe;">$imageStoragePolicy</span> `</div><div><span style="color: #9cdcfe;">-MasterStoragePolicy</span> <span style="color: #9cdcfe;">$masterStoragePolicy</span> `</div><div><span style="color: #9cdcfe;">-ContentLibrary</span> <span style="color: #9cdcfe;">$contentLibrary</span></div><br /></div><p>And as simple as that, the cluster will be enabled (in a scripted and repeatable way). </p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-64034186460365039392021-05-04T22:24:00.003+03:002021-05-04T22:24:31.086+03:00vSphere with Tanzu - Enable Supervisor Cluster<p>Before diving head first into how to enable supervisor cluster it's important to clarify a few aspects. There are several great posts (<a href="https://www.virtuallyghetto.com/2020/10/automating-workload-management-on-vsphere-with-tanzu.html" target="_blank">here</a> and <a href="https://www.virtuallyghetto.com/2020/11/complete-vsphere-with-tanzu-homelab-with-just-32gb-of-memory.html" target="_blank">here</a>) on how to deploy automatically Tanzu on vSphere. The reason I choose to present a step by step guide is because going through the manual steps helped me clarifying some aspects. I will not be covering the networking part. 
There are two ways of enabling Tanzu on vSphere - using <a href="https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-8D0E905F-9ABB-4CFB-A206-C027F847FAAC.html" target="_blank">NSX-T</a> or using vSphere networking and a <a href="https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-5673269F-C147-485B-8706-65E4A87EB7F0.html" target="_blank">load balancer</a>. </p><p>The Supervisor Cluster is a cluster enabled for vSphere with Tanzu. There is a one-to-one mapping between the Supervisor Cluster and the vSphere cluster. This is important because there are features that are defined at the Supervisor Cluster level only and inherited at the Namespace level. A vSphere Namespace represents a set of resources where vSphere Pods, Tanzu Kubernetes clusters and VMs can run. It is similar to a resource pool in the sense that it brings together the compute and storage resources that can be consumed. A Supervisor Cluster can have many Namespaces, however at the time of writing there is a limit of 500 namespaces per vCenter Server. Depending on how you map namespaces to internal organizational units this can also be important. The high level architecture and components of a supervisor cluster can be seen <a href="https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-3E4E6039-BD24-4C40-8575-5AA0EECBBBEC.html" target="_blank">here</a>. </p><p><b>Requirements</b></p><p></p><ul style="text-align: left;"><li>Configure NSX-T. Tanzu workloads need a T0 router configured on an edge cluster. All other objects (T1's, LB's, segments) are configured automatically during pod deployment. The recommended edge size is large, but it works with medium for lab deployments. Also for lab only, the edge cluster can run with a single edge node. 
Deploying and configuring NSX-T is not in the scope of this article.</li></ul><ul style="text-align: left;"><li>vCenter Server level</li><ul><li><b>vSphere cluster</b> with DRS and HA enabled</li><li><b>content library</b> for Tanzu Kubernetes cluster images subscribed to https://wp-content.vmware.com/v2/latest/lib.json. In case you don't have Internet connectivity from vCenter Server, you will need to download the images offline and upload them to the library. If you can reach the Internet via a proxy, you can add the proxy in the vCenter Server VAMI interface (https://vcs_fqdn:5480) </li><li><b>storage policies</b> - for lab purposes one policy can be created and used for all types of storage. Go to Policies and Profiles and create a new VM Storage Profile - Enable host based rules and select Storage I/O Control</li></ul></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1amV_FYkII5KDYVkL0dU-7rN7eI7glaypBinPbpVmBctFZ2H_1HutrFUXCeSwJyEK39y6kh2WVq9YSTOkfdBGkge6X5wk2PcNpj1lDCgED-akTUgHt_I27uhYQrbicu3D7UXPiVoIACU/s1155/tanzu-storage-policy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="1155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1amV_FYkII5KDYVkL0dU-7rN7eI7glaypBinPbpVmBctFZ2H_1HutrFUXCeSwJyEK39y6kh2WVq9YSTOkfdBGkge6X5wk2PcNpj1lDCgED-akTUgHt_I27uhYQrbicu3D7UXPiVoIACU/s320/tanzu-storage-policy.png" width="320" /></a></div><div style="text-align: center;"><br /></div><ul style="text-align: left;"><li>IPs - for ingress and egress traffic (routed), pod and service (internal traffic) </li></ul><ul style="text-align: left;"><li>latest version of vCenter Server - 7.0 U2a (required for some of the new functionalities - VM operator and namespace self-service)</li></ul><ul style="text-align: left;"><li>NTP working and configured for vCenter Server and NSX-T manager (and the rest of the components) </li></ul><div><br /></div><div>Enabling the Supervisor Cluster is pretty straightforward - go to Workload Management, Clusters and add a cluster. The wizard will take you through the following steps. </div><div><br /></div><div>First select vCenter Server and the type of networking. If you don't have NSX-T configured, then you can use vSphere Distributed Switch, but first a load balancer needs to be installed (<a href="https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-5673269F-C147-485B-8706-65E4A87EB7F0.html" target="_blank">HAproxy </a>or AVI)</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk-x-veJ1pTxxFc8yISGAR4om2uN70BbXFsf0F3fP9aK19QGdCmyQlI90etz5Qq9d0h8MQ8mI2PJaBRU5eY147y87s6g_b18zbuZ6XVqLjZtsVO374vt3qBy6DFpMRv2FzuTtEUVcvK6M/s871/enable-workload-cluster-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="412" data-original-width="871" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk-x-veJ1pTxxFc8yISGAR4om2uN70BbXFsf0F3fP9aK19QGdCmyQlI90etz5Qq9d0h8MQ8mI2PJaBRU5eY147y87s6g_b18zbuZ6XVqLjZtsVO374vt3qBy6DFpMRv2FzuTtEUVcvK6M/s320/enable-workload-cluster-1.png" width="320" /></a></div><br /><div><br /></div><div>Then you select the vSphere cluster on which to enable the Supervisor cluster. 
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO0ztlJwEK87flY7Pj4qtOGlnbI6cbTdm9TeBH3REtQt04v5sZmKD2e-sB1Kr8AM2ZMQgfqGMQQwWCfCDh07u0xzZdxg6wAILjYQSrSsnMUk1wRNIIblaQgPv1I0XLkZNxW2ZRGTtsRGA/s1198/enable-workload-cluster-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="527" data-original-width="1198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO0ztlJwEK87flY7Pj4qtOGlnbI6cbTdm9TeBH3REtQt04v5sZmKD2e-sB1Kr8AM2ZMQgfqGMQQwWCfCDh07u0xzZdxg6wAILjYQSrSsnMUk1wRNIIblaQgPv1I0XLkZNxW2ZRGTtsRGA/s320/enable-workload-cluster-2.png" width="320" /></a></div><div><br /></div>Choose the size of the control plane VMs - the smaller they are the smaller the Kubernetes environment.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqGvn3nKc7sgkEK2-1yk8gdg16LYQ4s8byXFJ74YmWVCjLu__9kuL7TwoXecr1jw9AKIea07MHsfkw7vbPZlc-nXrAsKVBxKYtrSEq_ZwlDikJxx4GeXq-0PhxFO5lElpxpdHMddOLD5U/s1172/enable-workload-cluster-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="281" data-original-width="1172" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqGvn3nKc7sgkEK2-1yk8gdg16LYQ4s8byXFJ74YmWVCjLu__9kuL7TwoXecr1jw9AKIea07MHsfkw7vbPZlc-nXrAsKVBxKYtrSEq_ZwlDikJxx4GeXq-0PhxFO5lElpxpdHMddOLD5U/s320/enable-workload-cluster-3.png" width="320" /></a></div><div><br /></div><br /><div>Map storage policies to types of storage in the Supervisor cluster<br /><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTURZ2CuKniJle2zikUwzOcJS6TezDHA7G_zxDzJwMvvdUS1oLsgjCuq2jpPT75AH1K5XaAC5eExzjTfwLJsCO4LxXGDRxNfmX5k5Kk3wH7VuNpe17rvQz55AEA_ObkjI-JzzNKlDG1BU/s1146/enable-workload-cluster-4.png" imageanchor="1" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="359" data-original-width="1146" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTURZ2CuKniJle2zikUwzOcJS6TezDHA7G_zxDzJwMvvdUS1oLsgjCuq2jpPT75AH1K5XaAC5eExzjTfwLJsCO4LxXGDRxNfmX5k5Kk3wH7VuNpe17rvQz55AEA_ObkjI-JzzNKlDG1BU/s320/enable-workload-cluster-4.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Add management network details. It is important to clarify that the supervisor VMs have 2 NICs - one connected to a vSphere distributed portgroup that has access to vCenter Server and NSX-T manager, and another one connected to the Kubernetes service network. Please check "View Network Topology" in this step to get a clear picture of the Supervisor VM configuration. Also, supervisor VMs need a range of 5 free IPs that will be used - in my case I am selecting a range from the management network. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEHU5W5zqnUs5zrdztDEwXcxGZO3s53tzru7Ft1bC3nHspeI9ea9T5JxAhuUorZX0dgNlgmzaCVyDHh4lX48PDdS8U31_Loy4g0qTAKR4HrI2Ec38nl4k7NQhMCdejwADJppuuG-foDnc/s1249/enable-workload-cluster-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="641" data-original-width="1249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEHU5W5zqnUs5zrdztDEwXcxGZO3s53tzru7Ft1bC3nHspeI9ea9T5JxAhuUorZX0dgNlgmzaCVyDHh4lX48PDdS8U31_Loy4g0qTAKR4HrI2Ec38nl4k7NQhMCdejwADJppuuG-foDnc/s320/enable-workload-cluster-5.png" width="320" /></a></div><div><br /></div>Next add the network details for ingress and egress networks and also for internal cluster networks (service and pod). 
Ingress and egress networks are used to access services inside the Kubernetes cluster via DNAT (ingress) and by internal services to access the outside world via SNAT (egress). <div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBtserlhfnZAcbwe0d8hRAWw7dermJNOZBPfFazp_EpXk5BDGgwJUkzZdL-fUvz4AQ9grbGbQeuinMxknxw0ZP4nqIPKdDW9iQS9cx2WY4SxOor_sFhw2yHawWWoyNWgrEvSLpxjAGkrE/s1175/enable-workload-cluster-6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="464" data-original-width="1175" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBtserlhfnZAcbwe0d8hRAWw7dermJNOZBPfFazp_EpXk5BDGgwJUkzZdL-fUvz4AQ9grbGbQeuinMxknxw0ZP4nqIPKdDW9iQS9cx2WY4SxOor_sFhw2yHawWWoyNWgrEvSLpxjAGkrE/s320/enable-workload-cluster-6.png" width="320" /></a></div><br /><div>In case you use the same DNS server for management and service networks, the server must be reachable over both interfaces. The service network will use the IP of the egress network to reach DNS. </div><div><br /></div><div>Lastly, add the content library, review the configuration and give it a run. 
</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKlV2j_p45hq0GqjmFlBuavg1NzE7IqZvQbG5GK1-GCPsiN6yG7ORg11Pdb59MTds6il5SJE2hXa_jaG2FUwOcJIXvMapQMRqRHh1HHWTDPcG6_1YqoVOTbHGrhSvBIFvHaeDf8AUQc7o/s1058/enable-workload-cluster-7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="178" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKlV2j_p45hq0GqjmFlBuavg1NzE7IqZvQbG5GK1-GCPsiN6yG7ORg11Pdb59MTds6il5SJE2hXa_jaG2FUwOcJIXvMapQMRqRHh1HHWTDPcG6_1YqoVOTbHGrhSvBIFvHaeDf8AUQc7o/s320/enable-workload-cluster-7.png" width="320" /></a></div><br /> Once the cluster is deployed successfully you will see it in the ready state:</div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt-SE9RGGUEvC-oSkb9KEkrQFbcTiFxew3SJ9nmG4iTy7oMDndtebTIK6bH1Jargbg9ean5jYe5MDIMCssbJXLm0_r6RX0AXHosriESaiy2scKSvlFDBhPXStXwTdNunjq9NnugV3_67k/s1285/workload-cluster-deployed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="40" data-original-width="1285" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt-SE9RGGUEvC-oSkb9KEkrQFbcTiFxew3SJ9nmG4iTy7oMDndtebTIK6bH1Jargbg9ean5jYe5MDIMCssbJXLm0_r6RX0AXHosriESaiy2scKSvlFDBhPXStXwTdNunjq9NnugV3_67k/s320/workload-cluster-deployed.png" width="320" /></a></div><div><br /></div><div>You can now create namespaces and Kubernetes guest clusters. To access the cluster you will need to connect to https://cluster_ip and download kubectl vSphere plugin. 
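Once the plugin is downloaded from the cluster landing page and placed on your PATH, logging in and switching to a namespace looks roughly like this (a sketch with placeholder values in angle brackets; verify the flags against the plugin version shipped with your Supervisor Cluster):

```shell
# Log in to the Supervisor Cluster with the kubectl vSphere plugin
# (--insecure-skip-tls-verify is for labs only; use trusted certs in production)
kubectl vsphere login --server=<cluster_ip> \
    --vsphere-username administrator@vsphere.local \
    --insecure-skip-tls-verify

# List the namespaces (contexts) you have access to, then switch to one
kubectl config get-contexts
kubectl config use-context <namespace>
```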
</div><div><br /></div><div>Now that we have gone through all the manual steps, the next post will look at automating the configuration using PowerCLI.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><br /></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-15275063342536627262021-04-19T22:58:00.002+03:002021-04-19T22:58:38.036+03:00vRealize Automation 8.4 Disk Management <p>vRealize Automation 8.4 brings some enhancements to storage management at cloud template level. Since this is a topic that I am particularly interested in, I've decided to take a closer look. I've focused on two cases:</p><p></p><ul style="text-align: left;"><li>cloud template with predefined number of disks</li><li>cloud template with dynamic number of disks </li></ul><p></p><div><b><br /></b></div><div><b>Cloud template with predefined number of disks</b></div><p>First I've created a template with 2 additional disks attached to it. Both disks are attached to SCSI controller 1 and their size is given as input. Both disks are thin provisioned. The template looks as follows:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiowgvEBeNDlHJlPwC0japfTZg1FS3vHLu9ujbMVEB8Keh-7sbk5SaMCbmrYGg9myZdw4xAYbuM4bKMwkj2uWl9xi4qv1nxPNqKQUuh_HEkftgLAiXqEFRzHHEZExiLtwHm6KcKfdVOsWQ/s603/centos-vm-generic-template.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="193" data-original-width="603" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiowgvEBeNDlHJlPwC0japfTZg1FS3vHLu9ujbMVEB8Keh-7sbk5SaMCbmrYGg9myZdw4xAYbuM4bKMwkj2uWl9xi4qv1nxPNqKQUuh_HEkftgLAiXqEFRzHHEZExiLtwHm6KcKfdVOsWQ/s320/centos-vm-generic-template.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p>Let's see the code behind the template. 
There are 2 main sections:</p><p></p><ul style="text-align: left;"><li><b>inputs</b>: where the input parameters are defined</li><li><b>resources</b>: where template resources are defined. </li></ul><div><b>Inputs </b>section contains parameters for VM image flavor (defaults to micro) and disk sizes (default to 5GB each)</div><div><br /></div><div><b>Resources</b> section has 3 resources - the VM (Cloud_Machine_1) and its 2 additional disks (Cloud_Volume_1 and Cloud_Volume_2). Each resource is defined by a type and properties. </div><div><br /></div><div>The disks are mapped to the VM resource using attachedDisks property. The input parameters can be seen under each resource, for example for disk capacity: ${input.flavor}, ${input.disk1Capacity} and ${input.disk2Capacity}. Please note that in this case the SCSI controller and the unit number are given in the template. </div><div><br /></div><div><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><div style="line-height: 19px;"><div><span style="color: #569cd6;">formatVersion</span>: <span style="color: #b5cea8;">1</span></div><div><span style="color: #569cd6;">inputs</span>:</div><div> <span style="color: #569cd6;">flavor</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">string</span></div><div> <span style="color: #569cd6;">title</span>: <span style="color: #ce9178;">Flavor</span></div><div> <span style="color: #569cd6;">default</span>: <span style="color: #ce9178;">micro</span></div><div> <span style="color: #569cd6;">disk1Capacity</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">integer</span></div><div> <span style="color: #569cd6;">title</span>: <span style="color: #ce9178;">App Disk Capacity GB</span></div><div> <span style="color: #569cd6;">default</span>: <span style="color: #b5cea8;">5</span></div><div> <span 
style="color: #569cd6;">disk2Capacity</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">integer</span></div><div> <span style="color: #569cd6;">title</span>: <span style="color: #ce9178;">Log Disk Capacity GB</span></div><div> <span style="color: #569cd6;">default</span>: <span style="color: #b5cea8;">5</span></div><div><span style="color: #569cd6;">resources</span>:</div><div> <span style="color: #569cd6;">Cloud_Machine_1</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">Cloud.Machine</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">image</span>: <span style="color: #ce9178;">CentOS7</span></div><div> <span style="color: #569cd6;">flavor</span>: <span style="color: #ce9178;">'${input.flavor}'</span></div><div> <span style="color: #569cd6;">constraints</span>:</div><div> - <span style="color: #569cd6;">tag</span>: <span style="color: #ce9178;">'vmw:az1'</span></div><div> <span style="color: #569cd6;">attachedDisks</span>:</div><div> - <span style="color: #569cd6;">source</span>: <span style="color: #ce9178;">'${resource.Cloud_Volume_1.id}'</span></div><div> - <span style="color: #569cd6;">source</span>: <span style="color: #ce9178;">'${resource.Cloud_Volume_2.id}'</span></div><div> <span style="color: #569cd6;">Cloud_Volume_1</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">Cloud.Volume</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">SCSIController</span>: <span style="color: #ce9178;">SCSI_Controller_1</span></div><div> <span style="color: #569cd6;">provisioningType</span>: <span style="color: #ce9178;">thin</span></div><div> <span style="color: #569cd6;">capacityGb</span>: <span style="color: #ce9178;">'${input.disk1Capacity}'</span></div><div> <span style="color: #569cd6;">unitNumber</span>: <span style="color: 
#b5cea8;">0</span></div><div> <span style="color: #569cd6;">Cloud_Volume_2</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">Cloud.Volume</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">SCSIController</span>: <span style="color: #ce9178;">SCSI_Controller_1</span></div><div> <span style="color: #569cd6;">provisioningType</span>: <span style="color: #ce9178;">thin</span></div><div> <span style="color: #569cd6;">capacityGb</span>: <span style="color: #ce9178;">'${input.disk2Capacity}'</span></div><div> <span style="color: #569cd6;">unitNumber</span>: <span style="color: #b5cea8;">1</span></div></div></div><br /></div></div><div><br /></div><div><br /></div><div>Once the template is created, you can run a test to see if all constraints are met and if code will run as expected. This is a useful feature and it is similar to unit tests used in development processes. </div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg07_cjfBRilawwAnfM7nkQuYQpvWNU7qEUCsYBQDMDGYvsiy3iQOTds5IzIT0g8ROxySKIm3QkUHp770zUjEIvfVey3DLmR8XBIBjBJ9ihpA2E5JPEahR6wUCWFMzDzWOBhYFr5KuxTmU/s532/test-cloud-template.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="250" data-original-width="532" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg07_cjfBRilawwAnfM7nkQuYQpvWNU7qEUCsYBQDMDGYvsiy3iQOTds5IzIT0g8ROxySKIm3QkUHp770zUjEIvfVey3DLmR8XBIBjBJ9ihpA2E5JPEahR6wUCWFMzDzWOBhYFr5KuxTmU/s320/test-cloud-template.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div>If tests are successful, you can deploy the template. After the resources are provisioned, you can select in the topology view any of the resources and check the details and the available day 2 actions in the right pane. 
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVmpFmgWuzdRbFYXfAOrSEdlsBpana3cYxSDI2ytygt8KRfAswEAhNkqdGuskc9ZC-gDSfEJzfH5txNOlGAVGyo_DVX7DeXJwfvuINGopA85mGjqDUtx0AZ6uFnEKWk1Ur4xADY9CLe3Y/s1124/vm-multiple-disk-deployment.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="241" data-original-width="1124" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVmpFmgWuzdRbFYXfAOrSEdlsBpana3cYxSDI2ytygt8KRfAswEAhNkqdGuskc9ZC-gDSfEJzfH5txNOlGAVGyo_DVX7DeXJwfvuINGopA85mGjqDUtx0AZ6uFnEKWk1Ur4xADY9CLe3Y/s320/vm-multiple-disk-deployment.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div>For the disks we can find out the resource name, its capacity, its state (if it is attached or not), if it is encrypted and to what machine it is associated.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnFdfMsuqTrEFMlOWG1bbEXjxArFZg2lCJ3zaWdD9aMI64gMOk_EY7U2ExSN697t4O-KC5Cy9umJyIOMx5mPJy5I9LQOu7iQIMNKNVEsYfWBT6UV_KpazPdqfHNXiOE3LK2ctky7UI4yQ/s515/disk-details.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="471" data-original-width="515" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnFdfMsuqTrEFMlOWG1bbEXjxArFZg2lCJ3zaWdD9aMI64gMOk_EY7U2ExSN697t4O-KC5Cy9umJyIOMx5mPJy5I9LQOu7iQIMNKNVEsYfWBT6UV_KpazPdqfHNXiOE3LK2ctky7UI4yQ/s320/disk-details.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div>More details are displayed under custom properties such as the controller name, datastore on which the disk is placed and so on. 
</div><div><br /></div><div>We can resize the disks and also remove the disks from the machine (delete). You can see below a resize action where the existing value is displayed and the new value is typed:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwPID6JjJwj19CoD2gvNjCGpiWYFpe38Jsia2Bcyag2812igOVqM7S-b6VyMU81ylZHslJv3FrweNhDlGbxUW2duqPrPfHjp9ccQTd_wLbkcyDDj-O4OaDwY0QSUXpejtcKRohcu7pxFw/s562/resize-disk.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="280" data-original-width="562" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwPID6JjJwj19CoD2gvNjCGpiWYFpe38Jsia2Bcyag2812igOVqM7S-b6VyMU81ylZHslJv3FrweNhDlGbxUW2duqPrPfHjp9ccQTd_wLbkcyDDj-O4OaDwY0QSUXpejtcKRohcu7pxFw/s320/resize-disk.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><div><br style="text-align: left;" /></div></div><br /><div><b>Cloud template with dynamic number of disks </b></div><div><b><br /></b></div><div>The first example uses a predefined number of disks in the template even though the disk size is given as an input parameter. Another use case is to let the consumer specify how many disks they need attached to the VM (obviously with some limitations). 
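Before looking at the template code, the expansion logic is worth spelling out: vRA creates one volume resource per element of the disk input array and then maps the resulting IDs into the machine's attachedDisks list. A few lines of Python illustrate the idea (this is a conceptual model I wrote for clarity, not vRA's actual engine):

```python
# Illustrative model of the per-disk expansion (hypothetical helper, not vRA code).

def expand_disks(disks):
    """disks: list of dicts like {"size": 5}, as entered in the request form."""
    if len(disks) > 6:  # mirrors minItems: 0 / maxItems: 6 from the template
        raise ValueError("between 0 and 6 disks allowed")
    # count: '${length(input.disks)}' -> one Cloud.Volume per array element
    volumes = [
        {"name": f"disk[{i}]", "provisioningType": "thin", "capacityGb": d["size"]}
        for i, d in enumerate(disks)
    ]
    # attachedDisks: '${map_to_object(resource.disk[*].id, "source")}'
    # -> wrap each volume id in a {"source": ...} object
    attached = [{"source": v["name"]} for v in volumes]
    return volumes, attached

volumes, attached = expand_disks([{"size": 5}, {"size": 10}])
print(len(volumes))           # 2
print(attached[1]["source"])  # disk[1]
```

The same shape shows up in the template below: the `disks` input array drives `count`, and `map_to_object` turns the list of volume IDs into the attachedDisks objects.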
</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2XsR20JNLOPULLbTTK_jlZcqj1sqGdntc048LhElDYekZ34Z3ZPwh8klEkRAgIlrTxeoc_gWvJ3vx2p9EQ0UKSPNjb4Mp0fWUIwpJq4gg7oryFeN29afWQIlPQccXFqZOlkiuJrrGWg/s417/template-dynamic-disks.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="190" data-original-width="417" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2XsR20JNLOPULLbTTK_jlZcqj1sqGdntc048LhElDYekZ34Z3ZPwh8klEkRAgIlrTxeoc_gWvJ3vx2p9EQ0UKSPNjb4Mp0fWUIwpJq4gg7oryFeN29afWQIlPQccXFqZOlkiuJrrGWg/s320/template-dynamic-disks.png" width="320" /></a></div><div><br /></div>In this case the code is looking a bit different. We define an array as the input for the disk sizes. The array is dynamic, but in our case limited to maximum 6 values (6 disks). This array is then used to define the Cloud.Volume resource. <div><br /><div><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><div style="line-height: 19px;"><div><span style="color: #569cd6;">formatVersion</span>: <span style="color: #b5cea8;">1</span></div><div><span style="color: #569cd6;">inputs</span>:</div><div> <span style="color: #569cd6;">flavor</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">string</span></div><div> <span style="color: #569cd6;">title</span>: <span style="color: #ce9178;">Flavor</span></div><div> <span style="color: #569cd6;">default</span>: <span style="color: #ce9178;">micro</span></div><div> <span style="color: #569cd6;">disks</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">array</span></div><div> <span style="color: #569cd6;">minItems</span>: <span style="color: #b5cea8;">0</span></div><div> <span style="color: #569cd6;">maxItems</span>: <span 
style="color: #b5cea8;">6</span></div><div> <span style="color: #569cd6;">items</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">object</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">size</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">integer</span></div><div> <span style="color: #569cd6;">title</span>: <span style="color: #ce9178;">Size (GB)</span></div><div> <span style="color: #569cd6;">minSize</span>: <span style="color: #b5cea8;">1</span></div><div> <span style="color: #569cd6;">maxSize</span>: <span style="color: #b5cea8;">50</span></div><div><span style="color: #569cd6;">resources</span>:</div><div> <span style="color: #569cd6;">Cloud_Machine_1</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">Cloud.Machine</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">image</span>: <span style="color: #ce9178;">CentOS7</span></div><div> <span style="color: #569cd6;">flavor</span>: <span style="color: #ce9178;">'${input.flavor}'</span></div><div> <span style="color: #569cd6;">constraints</span>:</div><div> - <span style="color: #569cd6;">tag</span>: <span style="color: #ce9178;">'vmw:az1'</span></div><div> <span style="color: #569cd6;">attachedDisks</span>: <span style="color: #ce9178;">'${map_to_object(resource.disk[*].id, "source")}'</span></div><div> <span style="color: #569cd6;">disk</span>:</div><div> <span style="color: #569cd6;">type</span>: <span style="color: #ce9178;">Cloud.Volume</span></div><div> <span style="color: #569cd6;">allocatePerInstance</span>: <span style="color: #569cd6;">true</span></div><div> <span style="color: #569cd6;">properties</span>:</div><div> <span style="color: #569cd6;">provisioningType</span>: <span style="color: #ce9178;">thin</span></div><div> <span style="color: 
#569cd6;">capacityGb</span>: <span style="color: #ce9178;">'${input.disks[count.index].size}'</span></div><div>    <span style="color: #569cd6;">count</span>: <span style="color: #ce9178;">'${length(input.disks)}'</span></div><br /></div></div><div><span style="color: #ce9178;"><br /></span></div></div><div><br /></div><div>When requesting the deployment, a user can leave the default disk in the VM image or add up to 6 more disks</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFjaDL30LZ7YNt3CLzr_iy4v0SMhkiZQbVd_C4x0C8bScYFpakL7lJsBCw1XBwXnZX7faNpiaW00OSIMMslwzfQSE4fNeEsSoPIJGUf5uzPuTsvF2EpR9L43sX-2dyzvJhabwCXI2oTUI/s805/deployment-page-dynamic-disks-template.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="411" data-original-width="805" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFjaDL30LZ7YNt3CLzr_iy4v0SMhkiZQbVd_C4x0C8bScYFpakL7lJsBCw1XBwXnZX7faNpiaW00OSIMMslwzfQSE4fNeEsSoPIJGUf5uzPuTsvF2EpR9L43sX-2dyzvJhabwCXI2oTUI/s320/deployment-page-dynamic-disks-template.png" width="320" /></a></div><br /><div><br /></div><div>Details about the disks and controllers can be seen directly from vRA. 
In the example below all disks are placed on the same controller:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh206pvTOUwqpd60l-r6qFL2NPWPQEBPbHHDtcm2m1M_jaVnnKsEw3ZTv6gBSxRFCvjM8LY4csNNVPhmFTZmU7Y3nZka2pf1lwDUOFAE4wah3ypciAIReKSxe18TltXXRqYB8v3VNnYBZw/s814/vra-view-disks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="409" data-original-width="814" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh206pvTOUwqpd60l-r6qFL2NPWPQEBPbHHDtcm2m1M_jaVnnKsEw3ZTv6gBSxRFCvjM8LY4csNNVPhmFTZmU7Y3nZka2pf1lwDUOFAE4wah3ypciAIReKSxe18TltXXRqYB8v3VNnYBZw/s320/vra-view-disks.png" width="320" /></a></div><br /><div><br /></div><div><br /></div><div><b>Caveats</b></div><div><br /></div><div>When adding disks of the same size, an error about "data provided already entered" is displayed. It is not clear at this time whether it is caused by my code or by a product limitation.</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_2QXumgxT-KFOdqKmx9g6c-cGgS-BPNvZVz7PsuRXv4R1CvgqWvYFIMQcNQzmAkAqj9NzzV1SI7RoBM8JzQEyz22PWsgkmW5TXT6JByBzR0yoDIIM1tmxTsNifRXUpbc3YGCZQh1M4kA/s534/error-dynamic-disks-same-size.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="172" data-original-width="534" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_2QXumgxT-KFOdqKmx9g6c-cGgS-BPNvZVz7PsuRXv4R1CvgqWvYFIMQcNQzmAkAqj9NzzV1SI7RoBM8JzQEyz22PWsgkmW5TXT6JByBzR0yoDIIM1tmxTsNifRXUpbc3YGCZQh1M4kA/s320/error-dynamic-disks-same-size.png" width="320" /></a></div><br /><div><br /></div></div><div>The controller type is automatically taken from the VM template (image). Being able to actually specify the controller type or even change it as a day 2 operation would also be helpful. 
</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><p></p></div></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com2tag:blogger.com,1999:blog-555576322574773333.post-8199716678840233272021-04-18T22:24:00.001+03:002021-04-19T23:00:20.783+03:00What's new in vRealize Automation 8.4<p> Last Friday vRealize Automation 8.4 was released and we are going to take a look at some of the new features. </p><p><b>vRA vRO Plugin</b></p><p>The vRO plugin for vRA is back and it seems it is here to stay for good. This is one of the long-awaited comebacks. There are several phases of development for the plugin and what we get now is the phase 1 functionality:</p><p></p><ul style="text-align: left;"><li>management of vRA on-premises and vRA Cloud hosts</li><li>preserved authentication to the hosts and dynamic host creation</li><li>REST client available allowing requests to vRA</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ48ImMSC1ZeVI049zaJY-k4OBduOMrWzGrju1SWgK-80oDUI5HaTbo6QvDsNjDXIEGheMr-UNU_3s8mp-qApUal6NoosKIo9axzGj2lalyJeUxDGMg5rUYDl0lK5RJl19jwo05UhxbyA/s1173/vra_vro_workflows.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="607" data-original-width="1173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ48ImMSC1ZeVI049zaJY-k4OBduOMrWzGrju1SWgK-80oDUI5HaTbo6QvDsNjDXIEGheMr-UNU_3s8mp-qApUal6NoosKIo9axzGj2lalyJeUxDGMg5rUYDl0lK5RJl19jwo05UhxbyA/s320/vra_vro_workflows.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqlqCISmADuJZ1wu9YEcQRmWKyI4b5C2SYRrdQS-fIWGV5VnY_vd9dI8HtzAB4JU2b7j6P0nlU3P-Gaz1KNCG1USd_PFmoLOieBPI921uku_At6yYj-KTc25RXFyY_uSjJ-G7X2BWnkws/s483/vra_vro_plugin.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="345" data-original-width="483" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqlqCISmADuJZ1wu9YEcQRmWKyI4b5C2SYRrdQS-fIWGV5VnY_vd9dI8HtzAB4JU2b7j6P0nlU3P-Gaz1KNCG1USd_PFmoLOieBPI921uku_At6yYj-KTc25RXFyY_uSjJ-G7X2BWnkws/s320/vra_vro_plugin.png" width="320" /></a></div><div><br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><div style="text-align: left;">The plugin is supported in vRA 8.3, but it has to be downloaded and installed manually. There seems to be a plan for VRO especially if we look back at support added for other languages such as Node.js, Python and PowerShell. </div></div><br /><div><br /></div><div><b>Storage Enhancements</b></div><div><b><br /></b></div><div>At storage level there are new features that improve visibility and management:</div><div><ul style="text-align: left;"><li>specify order in which the disks are created </li><li>choose SCSI controller to which the disk is connected </li><li>day 2 actions on the disks part of image template</li></ul><div><br /></div></div><div>Deploy multiple disks blueprint:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgfZv4Mtne0nACzFt7FRPZ7-NmaUYuKNxm1vEt14vOgnAvuzF2mVRJgZqOn-Inws0KhWPUWIkjMY_dggmRtbWEd1tax5QFxd5s83p3lSoQUrr5ow4vYieZMeObl3wNyAtbFLmY_LhRLiM/s616/multiple-disks.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="187" data-original-width="616" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgfZv4Mtne0nACzFt7FRPZ7-NmaUYuKNxm1vEt14vOgnAvuzF2mVRJgZqOn-Inws0KhWPUWIkjMY_dggmRtbWEd1tax5QFxd5s83p3lSoQUrr5ow4vYieZMeObl3wNyAtbFLmY_LhRLiM/s320/multiple-disks.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifDuGbKyv3WIJrSUJr-_9S7w6wlXBXzu0CzcHhAgFT9ec9EEGdH0FG3xT8oAFrPiGKjJKC6byGCEgplcqtq0QL6Vcs3ZFZKXcJhxgCSXo-jy3Nro3lW4wIkXAfsLzbPFdGV9ZXN1PX74A/s425/deploy-vm-multiple-disks.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="280" data-original-width="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifDuGbKyv3WIJrSUJr-_9S7w6wlXBXzu0CzcHhAgFT9ec9EEGdH0FG3xT8oAFrPiGKjJKC6byGCEgplcqtq0QL6Vcs3ZFZKXcJhxgCSXo-jy3Nro3lW4wIkXAfsLzbPFdGV9ZXN1PX74A/s320/deploy-vm-multiple-disks.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">A more detailed article about disk management can be found <a href="https://www.sysadminstories.com/2021/04/vrealize-automation-84-disk-management.html" target="_blank">here </a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div><b>Azure Provisioning Enhancements</b></div><div><b><br /></b></div><div>A series of new features is available for Azure integration:</div><div><ul style="text-align: left;"><li>support for Azure shared images </li><li>Azure disk encryption set - encrypt VMs and attached disks and support 3rd party KMS </li><li>Azure disk snapshot - create and manage disk snapshots with Azure deployments</li></ul></div><div><br /></div><div><b>ITSM Integration with ServiceNow Enhancements </b></div><div><br /></div><div>For those of you using ServiceNow as a portal, new 
enhancements are brought for the integration with vRA. </div><div><div><ul style="text-align: left;"><li>Support for Catalog Items which have Custom Resources (except for vRO Objects)</li><li>Support for Catalog Items with Custom Day 2 actions</li><li>Ability to customize the vRA Catalog by adding Edit Boxes and Drop-downs in ServiceNow.</li><li>Ability to attach a script to these fields.</li><li>Deployment Details available in the Service Portal</li></ul></div><div style="font-weight: bold;"><span style="font-weight: 400;">If you are using on-premises ServiceNow, this integration is not yet validated (it seems to be on the way though).</span></div></div><div style="font-weight: bold;"><span style="font-weight: 400;"><br /></span></div><div><b>Enhancements to Configuration Management Tools</b></div><div><b><br /></b></div><div>The configuration management ecosystem supported with vRA (Puppet, SaltStack, Ansible) also got its enhancements.</div><div><br /></div><div>This was just a short overview of the new features brought in by vRA 8.4. The full list can be read in the<a href="https://docs.vmware.com/en/vRealize-Automation/8.4/rn/vRealize-Automation-84-releasenotes.html#whatsnew" target="_blank"> release notes.</a></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-1313571185834383072021-03-01T20:35:00.002+02:002021-03-02T09:30:40.974+02:00Deploy VCSA Appliance with Terraform<p>I am back to an older project involving VMware products and Terraform. For those of you new to the subject, Terraform is an open source infrastructure as code tool developed by HashiCorp. It allows you to define your entire infrastructure in the HashiCorp Configuration Language (HCL), with JSON files where HCL is not enough. </p><p>The appeal of Terraform is its ability to easily deliver infrastructure across different environments: public cloud, private cloud, Kubernetes. 
You write your configuration files, test them (with plan) and then apply them to the infrastructure to get your resources deployed. Other tools can be used alongside it, such as HashiCorp Vault, a secrets management solution that can be consumed programmatically. In my example I will be using Vault to store the passwords required for setting up VCSA. </p><p>In this example we will use Terraform to update the VCSA JSON template with values provided in a variable file and then run the VCSA CLI installer. So we are not using the vSphere provider, but rather the local provider to modify the template file and the null provider to run a local command. I chose this example because it is something I struggled to get working. </p><p>I've used the following simple project structure:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJDaso7I1S4cJmiSWJ7gjyOoH512mfQSl7JEtpLbb-w3IqqElSLpkrMkbkmIh2O9ofOKRNjCobvOu39RA73frUoHLhZdHzaEcJO6ViDcripA_OGG8VehvhOFuTDtFdfWHvyea1oOxZpns/s221/TF+structure.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="147" data-original-width="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJDaso7I1S4cJmiSWJ7gjyOoH512mfQSl7JEtpLbb-w3IqqElSLpkrMkbkmIh2O9ofOKRNjCobvOu39RA73frUoHLhZdHzaEcJO6ViDcripA_OGG8VehvhOFuTDtFdfWHvyea1oOxZpns/s0/TF+structure.png" /></a></div><div class="separator" style="clear: both; text-align: left;">The templates folder contains the modified VCSA template. 
Although all .tf files could be merged into one (main.tf), I prefer this layout because it keeps the code more readable (and yes, variables.tf holds the variables, while vault.tf holds the Vault provider definition and the keys to the secrets)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b>main.tf </b>defines two resources: one that renders the template into a configuration file and one that executes a local command </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div style="background: rgb(0, 0, 0); border-width: 0em 0em 0em 0em; border: 0em solid gray; overflow: auto; padding: 0em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #cccccc;">resource</span> <span style="color: #cd0000;">"local_file"</span> <span style="color: #cd0000;">"vcsa_json"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">content</span> <span style="color: #cccccc;">=</span> <span style="color: #cccccc;">templatefile</span> <span style="color: #cccccc;">(</span>
<span style="color: #00cd00;">var</span><span style="color: #cccccc;">.template_file_path,</span>
<span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">vc_fqdn</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vcenterserver,</span>
<span style="color: #cccccc;">vc_user</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vcenterserver_user,</span>
<span style="color: #cccccc;">vc_user_pass</span> <span style="color: #cccccc;">=</span> <span style="color: #cccccc;">data.vault_generic_secret.vcenter_auth.data[</span><span style="color: #cd0000;">"value"</span><span style="color: #cccccc;">],</span>
<span style="color: #cccccc;">vm_network</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.pg_mgmt,</span>
<span style="color: #cccccc;">vdc</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vdc,</span>
<span style="color: #cccccc;">datastore</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.datastore,</span>
<span style="color: #cccccc;">host</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.host,</span>
<span style="color: #cccccc;">cluster</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.cluster,</span>
<span style="color: #cccccc;">vcsa_name</span> <span style="color: #cccccc;">=</span> <span style="color: #cccccc;">element(split(</span><span style="color: #cd0000;">"."</span><span style="color: #cccccc;">,</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vcsa_fqdn),</span><span style="color: #cd00cd;">0</span><span style="color: #cccccc;">),</span>
<span style="color: #cccccc;">vcsa_fqdn</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vcsa_fqdn,</span>
<span style="color: #cccccc;">vcsa_ip</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.vcsa_ip,</span>
<span style="color: #cccccc;">prefix</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.prefix,</span>
<span style="color: #cccccc;">gateway</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.gateway,</span>
<span style="color: #cccccc;">dns</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.dns,</span>
<span style="color: #cccccc;">vcsa_root_pass</span> <span style="color: #cccccc;">=</span> <span style="color: #cccccc;">data.vault_generic_secret.vcsa_root.data[</span><span style="color: #cd0000;">"value"</span><span style="color: #cccccc;">],</span>
<span style="color: #cccccc;">ntp_servers</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.ntp,</span>
<span style="color: #cccccc;">sso_password</span> <span style="color: #cccccc;">=</span> <span style="color: #cccccc;">data.vault_generic_secret.vcsa_admin.data[</span><span style="color: #cd0000;">"value"</span><span style="color: #cccccc;">]</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">)</span>
<span style="color: #cccccc;">filename</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">var</span><span style="color: #cccccc;">.config_file_path</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">resource</span> <span style="color: #cd0000;">"null_resource"</span> <span style="color: #cd0000;">"vcsa_install"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">provisioner</span> <span style="color: #cd0000;">"local-exec"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">command</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"${var.installcmd_file_path}/vcsa-deploy install --accept-eula --acknowledge-ceip --no-esx-ssl-verify ${var.config_file_path}"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">}</span>
</pre></div>
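The post does not show a provider requirements block for this setup; a minimal sketch of what it implies (the local and null providers used in main.tf, plus vault for vault.tf) could look like the following. The version constraints are illustrative, not taken from the original project.

```hcl
# versions.tf (hypothetical) -- provider requirements implied by main.tf and vault.tf
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"   # renders the VCSA JSON configuration file
      version = ">= 2.0"
    }
    null = {
      source  = "hashicorp/null"    # runs vcsa-deploy via local-exec
      version = ">= 3.0"
    }
    vault = {
      source  = "hashicorp/vault"   # fetches the passwords from Vault
      version = ">= 2.0"
    }
  }
}
```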
<div><br /></div><div><br /></div>The local_file resource takes the template given by the template_file_path variable and creates a configuration file at the path given in the config_file_path variable. The null_resource executes a local command, in this case the vcsa-deploy command, to which we pass the updated configuration file. <div><br /></div><div>In the templatefile call in main.tf you can see references to variables from variables.tf (var.something) and to data from vault.tf (data.vault_generic_secret.some_path). Let's look at the two files.</div><div><br /></div><div><b>variables.tf </b></div><div><br /></div><div><b>
<!--HTML generated using hilite.me--><div style="background: rgb(0, 0, 0); border-width: 0em 0em 0em 0em; border: 0em solid gray; overflow: auto; padding: 0em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"template_file_path"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"JSON template file path"</span>
<span style="color: #00cd00;">type</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">string</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"templates/vcsa70_embedded_vCSA_on_VC.json"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"config_file_path"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"vcsa configuration JSON file path"</span>
<span style="color: #00cd00;">type</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">string</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"/data/build/vcsa01_embedded_vCSA_on_VC.json"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"installcmd_file_path"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"command line file path"</span>
<span style="color: #00cd00;">type</span> <span style="color: #cccccc;">=</span> <span style="color: #00cd00;">string</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"/data/VMware-VCSA-all-7.0.1-17491101/vcsa-cli-installer/lin64"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"vcsa_fqdn"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"vcsa hostname"</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"vcsa01.mylab.local"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"vcsa_ip"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"vcsa ip address"</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"192.168.1.10"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">variable</span> <span style="color: #cd0000;">"prefix"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">description</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"IP prefix"</span>
<span style="color: #cdcd00;">default</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"24"</span>
<span style="color: #cccccc;">}</span>
</pre></div>
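Defaults like the ones above can be overridden per environment without editing variables.tf, for instance through a terraform.tfvars file. This is a hypothetical example; the values are purely illustrative.

```hcl
# terraform.tfvars (hypothetical) -- overrides the defaults from variables.tf
vcsa_fqdn = "vcsa02.mylab.local"
vcsa_ip   = "192.168.1.11"
prefix    = "24"
```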
</b></div><div><b><br /></b></div><div>Each variable is defined by a name and a value; it can also have a description and a type. (Note that not all variables are shown in this listing.) </div><div><b><br /></b></div><div><p></p><p><b>vault.tf</b></p><p>
<!--HTML generated using hilite.me--></p><div style="background: rgb(0, 0, 0); border-width: 0em 0em 0em 0em; border: 0em solid gray; overflow: auto; padding: 0em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #cccccc;">provider</span> <span style="color: #cd0000;">"vault"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">address</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"https://192.168.1.2:8200"</span>
<span style="color: #cccccc;">token</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"ABCD"</span>
<span style="color: #cccccc;">skip_tls_verify</span> <span style="color: #cccccc;">=</span> <span style="color: #cdcd00;">true</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">#</span> <span style="color: #cccccc;">vcsa</span> <span style="color: #cccccc;">deploy</span>
<span style="color: #cccccc;">data</span> <span style="color: #cd0000;">"vault_generic_secret"</span> <span style="color: #cd0000;">"vcsa_admin"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">path</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"kv-vmware-stgdev/administrator@vsphere.local"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">data</span> <span style="color: #cd0000;">"vault_generic_secret"</span> <span style="color: #cd0000;">"vcsa_root"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">path</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"kv-vmware-stgdev/root"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;"># referenced from main.tf as vcenter_auth; the secret path below is an example</span>
<span style="color: #cccccc;">data</span> <span style="color: #cd0000;">"vault_generic_secret"</span> <span style="color: #cd0000;">"vcenter_auth"</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">path</span> <span style="color: #cccccc;">=</span> <span style="color: #cd0000;">"kv-vmware-stgdev/vcenter-admin"</span>
<span style="color: #cccccc;">}</span>
</pre></div>
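One security note on the provider block above: hardcoding the token in vault.tf means it ends up on disk and possibly in version control. The Vault provider also reads its address and token from the VAULT_ADDR and VAULT_TOKEN environment variables, so a safer sketch keeps credentials out of the code entirely:

```hcl
# vault.tf (sketch) -- no credentials in code; export VAULT_ADDR and
# VAULT_TOKEN in the shell that runs terraform instead
provider "vault" {
  skip_tls_verify = true
}
```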
<br /><p></p></div><div>The file contains the Vault provider definition and the keys to the secrets used during deployment. </div><div><br /></div><div><br /></div><div><b>template file (vcsa70_embedded_vCSA_on_VC.json) </b></div><div><b><br /></b></div><div>The values from variables.tf and vault.tf are injected into the template. To make the default template updatable, you first need to modify it by adding placeholders that Terraform's templatefile function can interpolate. In my case I took the VCSA 7.0 embedded template and changed it as follows:</div><div><br /></div><div>
<!--HTML generated using hilite.me--><div style="background: rgb(0, 0, 0); border-width: 0em 0em 0em 0em; border: 0em solid gray; overflow: auto; padding: 0em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"__version":</span> <span style="color: #cd0000;">"2.13.0"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"__comments":</span> <span style="color: #cd0000;">"Sample template to deploy a vCenter Server Appliance with an embedded Platform Services Controller on a vCenter Server instance."</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"new_vcsa":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"vc":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"__comments":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"'datacenter' must end with a datacenter name, and only with a datacenter name. "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"'target' must end with an ESXi hostname, a cluster name, or a resource pool name. "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"The item 'Resources' must precede the resource pool name. "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"All names are case-sensitive. "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"For details and examples, refer to template help, i.e. vcsa-deploy {install|upgrade|migrate} --template-help"</span>
<span style="color: #cccccc;">],</span>
<span style="color: #cccccc;">"hostname":</span> <span style="color: #cd0000;">"${vc_fqdn}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"username":</span> <span style="color: #cd0000;">"${vc_user}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"password":</span> <span style="color: #cd0000;">"${vc_user_pass}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"deployment_network":</span> <span style="color: #cd0000;">"${vm_network}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"datacenter":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"${vdc}"</span>
<span style="color: #cccccc;">],</span>
<span style="color: #cccccc;">"datastore":</span> <span style="color: #cd0000;">"${datastore}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"target":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"${cluster}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"${host}"</span>
<span style="color: #cccccc;">]</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"appliance":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"__comments":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"You must provide the 'deployment_option' key with a value, which will affect the vCenter Server Appliance's configuration parameters, such as the vCenter Server Appliance's number of vCPUs, the memory size, the storage size, and the maximum numbers of ESXi hosts and VMs which can be managed. For a list of acceptable values, run the supported deployment sizes help, i.e. vcsa-deploy --supported-deployment-sizes"</span>
<span style="color: #cccccc;">],</span>
<span style="color: #cccccc;">"thin_disk_mode":</span> <span style="color: #cdcd00;">true</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"deployment_option":</span> <span style="color: #cd0000;">"small"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"name":</span> <span style="color: #cd0000;">"${vcsa_name}"</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"network":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"ip_family":</span> <span style="color: #cd0000;">"ipv4"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"mode":</span> <span style="color: #cd0000;">"static"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"system_name":</span> <span style="color: #cd0000;">"${vcsa_fqdn}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"ip":</span> <span style="color: #cd0000;">"${vcsa_ip}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"prefix":</span> <span style="color: #cd0000;">"${prefix}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"gateway":</span> <span style="color: #cd0000;">"${gateway}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"dns_servers":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"${dns}"</span>
<span style="color: #cccccc;">]</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"os":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"password":</span> <span style="color: #cd0000;">"${vcsa_root_pass}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"ntp_servers":</span> <span style="color: #cd0000;">"${ntp_servers}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"ssh_enable":</span> <span style="color: #cdcd00;">false</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"sso":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"password":</span> <span style="color: #cd0000;">"${sso_password}"</span><span style="color: #cccccc;">,</span>
<span style="color: #cccccc;">"domain_name":</span> <span style="color: #cd0000;">"vsphere.local"</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"ceip":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"description":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"__comments":</span> <span style="color: #cccccc;">[</span>
<span style="color: #cd0000;">"++++VMware Customer Experience Improvement Program (CEIP)++++"</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"VMware's Customer Experience Improvement Program (CEIP) "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"provides VMware with information that enables VMware to "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"improve its products and services, to fix problems, "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"and to advise you on how best to deploy and use our "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"products. As part of CEIP, VMware collects technical "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"information about your organization's use of VMware "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"products and services on a regular basis in association "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"with your organization's VMware license key(s). This "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"information does not personally identify any individual. "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">""</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"Additional information regarding the data collected "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"through CEIP and the purposes for which it is used by "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"VMware is set forth in the Trust & Assurance Center at "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"http://www.vmware.com/trustvmware/ceip.html . If you "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"prefer not to participate in VMware's CEIP for this "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"product, you should disable CEIP by setting "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"'ceip_enabled': false. You may join or leave VMware's "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"CEIP for this product at any time. Please confirm your "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"acknowledgement by passing in the parameter "</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"--acknowledge-ceip in the command line."</span><span style="color: #cccccc;">,</span>
<span style="color: #cd0000;">"++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"</span>
<span style="color: #cccccc;">]</span>
<span style="color: #cccccc;">},</span>
<span style="color: #cccccc;">"settings":</span> <span style="color: #cccccc;">{</span>
<span style="color: #cccccc;">"ceip_enabled":</span> <span style="color: #cdcd00;">false</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">}</span>
<span style="color: #cccccc;">}</span>
</pre></div>
<br /></div><div>If you look at the main.tf resource definition you will see the same keys from the JSON file, referenced as ${ } placeholders.</div><div><br /></div><div>Now all the code is written down and it's a simple matter of running terraform plan and terraform apply. </div><div><br /></div><div><br /></div>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-65477160349591504942020-12-10T04:00:00.001+02:002020-12-10T04:00:07.448+02:00NSX Load Balancer - Redirecting Traffic to Maintenance Page <p>In this post we'll look at two situations in which we need to redirect vRealize Automation traffic to a maintenance page. The type of traffic doesn't really matter as long as the traffic goes through the load balancer, but for a less abstract post we'll use vRA 7.x. The use cases are: </p><p></p><ul style="text-align: left;"><li>vRA services are down (for example the IaaS manager pool is gone) - in this case it would help if traffic is redirected from the vRA login portal to a "sorry server"</li><li>scheduled maintenance window (for patching) - you need vRA working normally, but you don't want anyone else to log in and start playing around </li></ul><p></p><p>For both cases we'll be using simple application rules in the NSX load balancer (well, if the services are actually behind an NSX load balancer). In a highly available architecture, every service in vRA will be behind a load balancer. For simplicity we'll look only at the vRA appliances; the approach can easily be extrapolated to the rest. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMbPInRyw5Hgc7N1uCnWempMmE56NvtcDoAUhJYTe79mCNoEOW9BAKvTIxpgAVTPlJO7t475vgolcQd4uKFy8tOHmskce4oCwW-ypqjptNcZTyF_m_fTMg03AYY5ASCg-0XFheiL_1m0U/s691/nsx-load-balancer-traffic-flows.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="568" data-original-width="691" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMbPInRyw5Hgc7N1uCnWempMmE56NvtcDoAUhJYTe79mCNoEOW9BAKvTIxpgAVTPlJO7t475vgolcQd4uKFy8tOHmskce4oCwW-ypqjptNcZTyF_m_fTMg03AYY5ASCg-0XFheiL_1m0U/s320/nsx-load-balancer-traffic-flows.png" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><p><span style="font-family: inherit;">When a user tries to connect to the vRA portal, it makes a request to the virtual IP assigned to the load balancer virtual server. The virtual server (VRA Appliance Virtual Server) has an associated pool of servers (VRA Appliance pool) to which it can direct the traffic. The blue path represents the normal situation, when the user reaches the portal on the vRA appliances. The green path does not actually exist yet and is the subject of this post. What we need is to redirect the user to another page when all servers in the VRA Appliance pool are down. For this we need a few additional elements.</span></p><p><span style="font-family: inherit;">First we need a VM that runs an HTTP server and is able to serve a simple HTML page, called in the diagram above "Sorry Server". We installed Apache, enabled SSL and created under the document root a structure similar to the vRA login URL (below, the document root is /var/www/html) to serve a custom index.html page.</span></p><p><span style="font-family: courier;">/var/www/html/vcac/org/[orgName]/index.html</span></p><p>At the NSX level we add the "sorry server" to a new pool, called "vra-maintenance-pool". 
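The sorry-server setup described above boils down to a couple of shell commands. This is a sketch with illustrative values: on the real server DOCROOT would be /var/www/html, and ORG would be your actual tenant name in place of [orgName].

```shell
# Recreate the vRA login URL structure under the Apache document root
# and serve a custom maintenance page from it.
DOCROOT=/tmp/sorry-docroot   # on the real sorry server: /var/www/html
ORG=myorg                    # illustrative tenant name, replaces [orgName]
mkdir -p "$DOCROOT/vcac/org/$ORG"
printf '<html><body><h1>vRA is under maintenance</h1></body></html>\n' \
  > "$DOCROOT/vcac/org/$ORG/index.html"
```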
We also create application rules to check the availability of the vRA appliances. Application rules are written using HAProxy syntax and they are used to manipulate traffic at the load balancer level. It's a simple rule, where we first check if there are any servers up and running in the vRA appliance pool using an access control list (acl). If the pool is down, the acl becomes true and we use another backend pool - the maintenance one:</p><p><span style="font-family: courier;"># detect if vra appliance is still up </span></p><p><span style="font-family: courier;">acl vra-appliance-down nbsrv(vra-appliance-pool) eq 0</span></p><p><span style="font-family: courier;"># use pool "vra-maintenance-pool" if app is dead</span></p><p><span style="font-family: courier;">use_backend vra-maintenance-pool if vra-appliance-down</span></p><p>The rule is then linked to the virtual server of the vRA appliances. Whenever a request comes to the virtual server, the rule is checked and if vra-appliance-pool is down, users will be redirected to the maintenance page. You can extend the rules and redirect users to the maintenance pool for other situations that may render vRA useless, such as the IaaS manager servers or other IaaS services being down. </p><div style="text-align: left;"><p>Another usage for application rules is restricting access to VRA during scheduled maintenance. 
In this case the rule will use an ACL to restrict the IPs accessing the vRA virtual servers by matching the source IP of the request.</p><p><span style="font-family: courier;"># allow only vra components and management server </span></p><p><span style="font-family: courier;">acl allowed-servers src 192.168.1.1 192.168.10.10 192.168.20.10</span></p><p><span style="font-family: courier;"># send everything else to maintenance page</span></p><p><span style="font-family: courier;">use_backend vra-maintenance-pool if !</span><span style="font-family: courier;">allowed-servers</span></p><p><span style="font-family: inherit;">Traffic is redirected to the maintenance pool when it comes from a source other than vRA itself or the management server. Happy patching! </span></p></div><p><br /></p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-30641193935412916582020-12-02T20:34:00.001+02:002020-12-02T20:40:07.510+02:00Using NSX load balancer as a monitoring tool for RESTful APIs <p>We are going to look at a different use case for the NSX load balancer - using it as a monitoring tool for external APIs. </p><p>Our core platform integrates with other systems using <a href="https://en.wikipedia.org/wiki/Representational_state_transfer" target="_blank">RESTful APIs</a>. Even though these systems are built with high availability in mind, they are sometimes highly unavailable. They are also part of the critical path for our core platform. Not being able to reach the systems creates troubles in the form of incident tickets because we fail to deliver services to our customers. So we needed a way to monitor those APIs.</p><p>We know that the systems are monitored, but we don't have access to those tools. We have our own tools, but they do not offer a simple and efficient way to check the status of a RESTful API. Ideally we don't want to introduce another monitoring tool. 
However, the core platform runs on top of NSX and it already uses NSX load balancers for its internal services. So why not use the load balancers to monitor the external services as well? </p><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIc6D2fZ9bW9gKSzwC6J2P8wbPDHGvpSuy0Ss1aBhtiey91K-R2cov-_ln9hrII2woBVdjzn_az4F-PzOO9gHm28p_MKQcZQWw27CMkqxXhXn2diRi8G4rcADYJoqcpS4topInnBkjrRM/s1076/high-level-overview-nsx-lb.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="418" data-original-width="1076" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIc6D2fZ9bW9gKSzwC6J2P8wbPDHGvpSuy0Ss1aBhtiey91K-R2cov-_ln9hrII2woBVdjzn_az4F-PzOO9gHm28p_MKQcZQWw27CMkqxXhXn2diRi8G4rcADYJoqcpS4topInnBkjrRM/s320/high-level-overview-nsx-lb.png" width="320" /></a></p><p>We created a service monitor and a pool in the load balancer for each of the external systems. This way NSX monitors the status of the RESTful API of each system and generates alerts whenever it is down. The status of the pool is then checked by the core platform. All communication between the core platform and the APIs goes directly; it does not pass through the load balancer.</p><p>The pool contains the RESTful API endpoint of the system, the same endpoint the core platform connects to directly. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_0kjugtwRINBTfA2bHJ_CUjx3ik0TfhJqeRaRL0L-aLhKb34FZ4w_ow_JlNdIQho_e6oWEUcI_s7uV8ywiwU6gc0hQtjPySzQ9edgrJJcLPK3EonqH0hFaezjQEKhrAOxuHBmU-nbo_Y/s862/dns-pool-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="862" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_0kjugtwRINBTfA2bHJ_CUjx3ik0TfhJqeRaRL0L-aLhKb34FZ4w_ow_JlNdIQho_e6oWEUcI_s7uV8ywiwU6gc0hQtjPySzQ9edgrJJcLPK3EonqH0hFaezjQEKhrAOxuHBmU-nbo_Y/s320/dns-pool-1.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9xZU6pxG5a6NW1PsfPsXpPgR-3zceDYH1xp9ww7WvOYV9LwOh-7RFqx_0R3ynkACnOLXDr14Be50YDm5E6AxaDTwMjkbZFZcC4N7Whyv-Fdjgmxd_iM_9WvrYkQTvO4-g_s91AO-fiGs/s862/dns-pool-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="862" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9xZU6pxG5a6NW1PsfPsXpPgR-3zceDYH1xp9ww7WvOYV9LwOh-7RFqx_0R3ynkACnOLXDr14Be50YDm5E6AxaDTwMjkbZFZcC4N7Whyv-Fdjgmxd_iM_9WvrYkQTvO4-g_s91AO-fiGs/s320/dns-pool-2.png" width="320" /></a></div><p>The service monitor uses GET requests to check the availability of the RESTful API. 
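As a rough illustration of what such a GET-based health check does (this is not NSX code, just a sketch of the semantics): the member is considered UP only when the endpoint answers with an HTTP 2xx; a layer 7 error such as a 400 response, or no answer at all, marks it DOWN. The URL and timeout below are made-up values.

```python
# Sketch of a GET health check, similar in spirit to what the NSX service
# monitor does. Not NSX code; URL and timeout values are illustrative.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def check_api(url: str, timeout: float = 5.0) -> str:
    """Return 'UP' if the endpoint answers with an HTTP 2xx, 'DOWN' otherwise."""
    try:
        with urlopen(url, timeout=timeout):
            return "UP"        # got a 2xx answer
    except HTTPError:
        # layer 7 error, e.g. "400 Bad Request"
        return "DOWN"
    except (URLError, OSError):
        # no answer at all: connection refused, unreachable, timed out
        return "DOWN"

# Example: a port nothing listens on is reported DOWN
print(check_api("http://127.0.0.1:1/", timeout=2))
```

The real service monitor adds interval, retry count and expected-response settings on top of this basic probe.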
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOD91DCKZMwRUWfuhk8BXjco5MEoFOxD6FCZSDCUf29OFOSiG4GlrvTgqxPpszeNPGF6qF3_zhie5jt8B2wPSmdZSh1yC9oLWB6PG1O988MAg0vwLi5_Yf3vFIdnBPq5rB4y7hZinOix8/s668/dns-service-monitor-1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="575" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOD91DCKZMwRUWfuhk8BXjco5MEoFOxD6FCZSDCUf29OFOSiG4GlrvTgqxPpszeNPGF6qF3_zhie5jt8B2wPSmdZSh1yC9oLWB6PG1O988MAg0vwLi5_Yf3vFIdnBPq5rB4y7hZinOix8/s320/dns-service-monitor-1.png" /></a></div><p><br /></p>Nothing fancy, just basic load balancer configuration. Half a configuration actually, because we stop here: no traffic goes through the load balancer to these pools. But whenever the external system is not reachable, the load balancer knows it, because the external system is now a member in one of its pools: <p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8YLYOfXizG6k5xKzFKhi5aN23NS2f5O4AcsOH463_1PLr6V0yhVg4WE8UMyvHs-Kh7Iw7U3sFaA9sSI11-uAQGAlAxhh7sge0voQBtE8JXFe-WbmS4LYkyn1kaS4HqjP5hcD6pLMHvpw/s1073/dns-pool-alert-message.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="111" data-original-width="1073" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8YLYOfXizG6k5xKzFKhi5aN23NS2f5O4AcsOH463_1PLr6V0yhVg4WE8UMyvHs-Kh7Iw7U3sFaA9sSI11-uAQGAlAxhh7sge0voQBtE8JXFe-WbmS4LYkyn1kaS4HqjP5hcD6pLMHvpw/s320/dns-pool-alert-message.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">The status of the member in the pool is accessible through the RESTful API of NSX Manager. 
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div>GET /api/4.0/edges/{edge-id}/loadbalancer/statistics</div></div></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><div style="background-color: #1e1e1e; color: #d4d4d4; font-family: Consolas, "Courier New", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div><span style="color: grey;"><</span><span style="color: #569cd6;">status</span><span style="color: grey;">></span>DOWN<span style="color: grey;"></</span><span style="color: #569cd6;">status</span><span style="color: grey;">></span></div><div><span style="color: grey;"><</span><span style="color: #569cd6;">failureCause</span><span style="color: grey;">></span>layer 7 response error, code:400 Bad Request<span style="color: grey;"></</span><span style="color: #569cd6;">failureCause</span><span style="color: grey;">></span></div><div><span style="color: grey;"><</span><span style="color: #569cd6;">lastStateChangeTime</span><span style="color: grey;">></span>2020-12-02 18:20:43<span style="color: grey;"></</span><span style="color: #569cd6;">lastStateChangeTime</span><span style="color: grey;">></span></div></div></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This way the core platform knows the status of its external systems before doing anything. More importantly, the core platform can now act on that status. In this case it will wait a specific period of time before trying to use the system again. 
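A sketch of the consuming side in Python: parse the member status out of the statistics response and back off while the external system is down. The `<member>` wrapper and the flat structure below are illustrative only, based on the fields shown above; the real statistics response nests edge, pool and member data more deeply.

```python
# Parse a pool member's status from NSX Manager statistics XML.
# SAMPLE is an illustrative fragment only; the real response from
# GET /api/4.0/edges/{edge-id}/loadbalancer/statistics is larger.
import xml.etree.ElementTree as ET

SAMPLE = """
<member>
  <status>DOWN</status>
  <failureCause>layer 7 response error, code:400 Bad Request</failureCause>
  <lastStateChangeTime>2020-12-02 18:20:43</lastStateChangeTime>
</member>
"""

def member_is_up(xml_text: str) -> bool:
    """True only when the pool member reports status UP."""
    member = ET.fromstring(xml_text)
    return member.findtext("status", default="UNKNOWN") == "UP"

if not member_is_up(SAMPLE):
    # External system is down: skip the call and retry after a delay,
    # instead of failing a customer-facing request.
    print("external API down, backing off")
```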
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">It is a pretty simple solution. It is also pretty obvious that the APIs should have been monitored from the start. We relied too much on the availability of those APIs and used a fire-and-forget approach. That approach was far from optimal: it impacted our KPIs and created additional operational workload. </div><br />razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0tag:blogger.com,1999:blog-555576322574773333.post-79621310317139228402020-11-03T10:01:00.006+02:002020-11-03T10:01:57.618+02:00About vSphere Cluster sizing, vMotions and DRS<p>This post applies to those who are (still) running older versions of vSphere. In my case it is vSphere 6.5.</p><p>We recently came across a very peculiar situation in a 16-host cluster. Out of the 400 VMs in the cluster, 200 were hosted on 2 hosts and the remaining 200 on the other 14 hosts. We were in the process of migrating VMs to this cluster from another one, but we were expecting DRS to distribute the VMs more evenly across the hosts. No, the VMs did not compete for memory or CPU, however having 100 VMs on the same host while other hosts are running fewer than 20 can cause issues in the case of a host failure event. </p><p>We were aware of DRS not being aware of vSphere HA, but this was not the case since the VMs were live migrated. The VMs did not compete for memory or CPU because the hosts had sufficient resources and the average memory size in this case was small. </p><p>The issue was fixed with manual redistribution across the cluster and human selection of the hosts during vMotion. One of the lessons we learned is to design the host size to match the average VM size and to have a pretty good idea of how many VMs we want to run on a host. 
If 100 is not acceptable, then make the hosts smaller (or the VMs bigger :-) )</p><p>Let's do a bit of math too:</p><p>We have 300 VMs with an average memory size of 12 GB and an average CPU size of 3 vCPUs, for a total of 3600 GB of RAM and 900 vCPUs. For a 1:3 physical core to vCPU oversubscription we need 300 physical cores - a 24-core CPU with HT enabled provides 48 threads. On a dual-socket server we get 96 threads, so we could fit the 300 VMs on 3 servers with a 1:3.125 oversubscription ratio. Add 1.5 TB of RAM to each ESXi host and you have your 100 VMs per host. But this is exactly the case we wanted to avoid. The alternative is to downsize to smaller CPUs, less RAM and more physical ESXi hosts. Let's aim for 60 VMs per host. We will need 5 hosts to accommodate the load, with 60 threads and 720 GB of RAM per ESXi host. Between the two, I would choose the second one. I think I should right-size the host capacity to fit the workload rather than putting a lot of resources in there. And don't forget about the N+1 failure tolerance of the cluster. </p>razzhttp://www.blogger.com/profile/15640437268011558371noreply@blogger.com0
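The arithmetic above can be condensed into a small sizing helper. This is just a sketch of the calculation in the post under its own assumptions: two HT threads per physical core, a vCPU-to-thread oversubscription target, and the N+1 host added on top of the capacity hosts.

```python
# Hypothetical cluster sizing helper mirroring the math in the post:
# RAM, threads and cores per host for a target VM density, plus N+1 HA.
import math

def size_cluster(vms, gb_per_vm, vcpu_per_vm, vms_per_host,
                 oversub=3, ha_spare=1):
    threads = math.ceil(vms_per_host * vcpu_per_vm / oversub)
    return {
        "hosts": math.ceil(vms / vms_per_host) + ha_spare,  # capacity + N+1
        "ram_gb_per_host": vms_per_host * gb_per_vm,
        "threads_per_host": threads,                        # with HT enabled
        "cores_per_host": math.ceil(threads / 2),           # 2 threads/core
    }

# 300 VMs at 60 per host: 5 capacity hosts (+1 spare),
# 720 GB RAM and 60 HT threads (30 physical cores) each
print(size_cluster(300, 12, 3, 60))
```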