In this blog I'm going to cover how vRealize Automation and vRealize Automation Cloud integrate out of the box with vSphere with Tanzu, helping to empower DevOps teams to easily request, provision, and operate Tanzu Kubernetes Grid (TKG) clusters as a service.
vRealize Automation is a multi-cloud, modern infrastructure automation platform with event-driven state management, designed to help organizations control and secure self-service for private cloud, hybrid cloud, and public cloud automation with governance, using a DevOps-based infrastructure delivery approach.
It helps improve IT agility, productivity, and efficiency so you can prepare for the future of your business, providing organizations a consistent way to automate across clouds and data centers.
As for vSphere 7 with Tanzu, it's considered the biggest release of vSphere in over a decade. It enables millions of IT administrators across the globe to get started with Kubernetes workloads within an hour, give or take.
It's truly a new generation of vSphere for containerized applications and the fastest path to Kubernetes. This single, streamlined solution bridges the gap between IT operations and developers with a new kind of infrastructure for modern, cloud-native applications, both on premises and in public clouds.
On one side it empowers developers with secure, self-service access to a fully compliant and conformant Kubernetes API; on the other side it empowers IT operators with visibility into Kubernetes workloads and clusters, and allows them to manage policies for an entire group of VMs, containers, or both from a unified platform.
A Tanzu Basic vSphere 7.0 Update 3c environment with Workload Management enabled.
An instance of vRealize Automation 8.6.1 or higher on premises, or vRealize Automation Cloud (SaaS).
To simplify the steps, I have created a flowchart as a reference that we will go through, describing each step with screenshots, to help you follow along as you configure the integration and provision both Supervisor Namespaces and Tanzu Kubernetes Grid clusters using vRealize Automation.
But Wait, There’s More!
Make sure you watch my upcoming video for this blog post on YouTube if you want to see me go over the above flowchart step by step and do a live demo provisioning a Supervisor Namespace and a Tanzu Kubernetes cluster from vRealize Automation using VMware Cloud Templates and the self-service portal.
I will also be outlining some lessons learned and a couple of enhancement opportunities that I personally would love to see, and at the end I'll deploy an actual Kubernetes voting app on the provisioned Kubernetes cluster from the command line using kubectl.
If you like the content and want to see more, please make sure to like the video, subscribe to the VMwareLab YouTube channel, and hit the notification icon so you don't miss any upcoming blogs or videos. It also helps the channel a ton, so I can continue producing and putting more content out there.
VMware vRealize Automation ITSM Application 8.2 is available now in the ServiceNow Store.
VMware vRealize Automation speeds up the delivery of infrastructure and application resources through a policy-based self-service portal, running on premises or as a service, helping organizations increase business and IT agility, productivity, and efficiency.
The solution delivers Day 1 service provisioning and Day 2 operational capabilities across private, hybrid, and multi-cloud environments, with the ability to assemble application blueprints using a visual canvas with a drag-and-drop interface as well as to create infrastructure-as-code blueprints.
The vRealize Automation ITSM plugin for ServiceNow provides an out-of-the-box integration between ServiceNow and the vRealize Automation catalog and governance model. It enables ServiceNow users to deploy virtual machines and perform basic day 2 operations on their CMDB assets.
If you have any questions or comments, please leave them in the comment section of either the blog post here or each YouTube video. Also, please take a minute and hit the like button if you liked the video.
To stay up to date with my latest blogs and videos, make sure to follow my blog site and do subscribe to my YouTube channel VMwareLab and smash that notification bell if you want to be notified when I upload new content.
Welcome to VMwareLab “Your VMware Cloud Management Blogger”
With vRealize Automation you can use an external IPAM provider to manage IP address assignments for your blueprint deployments.
In this integration use case, you use an existing IPAM provider package, in this case an Infoblox package, and an existing running vRealize Automation environment to build a provider-specific IPAM integration point.
You configure an existing network and create a network profile to support IP address allocation from the external IPAM provider. Finally, you create a blueprint that is matched to the network and network profile and deploy networked machines using IP values obtained from the external IPAM provider.
The Infoblox IPAM plug-in allows us to easily integrate vRealize Automation 8.1 and vRealize Automation Cloud with the Infoblox DDI appliance.
One of the main benefits of using Infoblox DDI is that it allows IT teams to consolidate DNS, DHCP, and IP address management into a single platform, deployed on-site and managed from a common console.
The Infoblox IPAM plugin 1.1 for vRealize Automation 8.1 adds IP address allocation as well as DNS record creation and deletion to our Cloud Assembly and Service Broker deployments.
The plugin is available on the VMware Solution Exchange and uses Action Based Extensibility (ABX) to retrieve IP data from the Infoblox grid, as well as to update the grid with DNS host records and other data for the deployed virtual machines (VMs) and networks.
vSphere private cloud
vRealize Automation 8.1
Infoblox NIOS or vNIOS appliance with WAPI version 2.7 or later
Infoblox grid is configured for IPAM and DNS
A good place to work and an ice cold beer.
In this video blog we are going to go through all the steps required to install, configure, and use the Infoblox IPAM plugin 1.1 for vRA 8.1 / vRA Cloud.
Let’s get started, Eh!
The vRA 8.1 Infoblox IPAM plug-in v1.1 is currently managed by VMware. The plug-in is not officially supported by Infoblox yet, but Infoblox is actively working toward certifying and providing support for it.
Plugin functionality is currently limited to IP address allocation/de-allocation, network creation/deletion, and DNS record creation/deletion.
If you happen to use a signed certificate on Infoblox (self-signed certificates shouldn't have this issue), you may encounter the error “Unable to validate the provided access credentials: Failed to validate credentials”. If you know for sure that your credentials are correct, you might have an Infoblox certificate issue. To fix that, you can check my colleague Dennis Derks' blog here.
If you use custom DNS views in Infoblox (internal, external, etc.), then some additional configuration is required that isn't easily discovered. To fix that, check this blog here.
If you have any comments, please leave them in the comment section of either the blog here or the YouTube video, and please hit the like button if you liked the video.
To stay up to date with my latest blogs and videos, make sure to follow my blog site and do subscribe to my YouTube channel VMwareLab and smash that notification bell.
After spending an enormous amount of time, starting somewhere in the summer of last year, trying to get vSphere customization to work with cloud-init while using vRealize Automation 8 or vRealize Automation Cloud as the automation platform to provision virtual machine deployments and install and configure the applications running on them, I finally have a workaround that I can say is guaranteed to work every single time, until something better comes along to resolve the conflict between vSphere customization and cloud-init during startup.
With some out-of-the-box thinking, I was able to use static IP assignment ( assignment: static ) within the vRA blueprints to leverage the static IP pool and the network metadata that we define in vRA via network profiles for the target networks we want to connect to, while using cloud-init with Ubuntu 16.04, 18.04, and 20.04 for now. The principle should be the same for other Linux distributions, even though RHEL seems to be the only OS today that just works, provided traditional Guest OS Customization (GOSC) is set in cloud-init.
Update (26/04/2022): If you're trying to use cloud-init with Ubuntu 20.04, please be aware of this KB; without its resolution, cloud-init will not be able to use OVF as a datasource, and therefore userdata will not be passed to the VM when using Cloud-Config in a vRealize Automation VMware Cloud Template.
Note: This will also work if you use DHCP IP assignment.
Hoping this is worth the time, I am documenting in this blog the step-by-step instructions on how to prepare your vSphere templates to leverage cloud-init, along with, for your own reference, a list of all the internet resources that I looked at while doing my research.
I will also add a video to the blog later that goes through the entire template preparation and then demos a typical vRA 8 deployment using static IP assignment, leveraging cloud-init to install selected packages per machine component and execute various commands to set up an application.
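For reference, the kind of Cloud-Config used in such a demo is plain cloud-init userdata. A minimal sketch might look like the following; the nginx package and the commands are placeholders of my choosing, not taken from the demo itself:

```yaml
#cloud-config
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
  - echo "provisioned by vRA" > /etc/motd
```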
I still say that this shouldn't be this hard for our customers to set up, and hopefully something like the Software Components feature I mentioned will save us all from this complexity. Of course, you can also do this via various configuration management tools such as Ansible and Puppet, which vRealize Automation 8 and vRealize Automation Cloud integrate with today out of the box.
At a high level: when the virtual machine first boots up, it gets rebooted to be customized by the dynamic vCenter customization spec that is created because we are using the static assignment property ( assignment: static ) in the blueprint code, as you see in the screenshot below. I make sure that during that time cloud-init is in a disabled state.
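As an illustration, the relevant part of such a blueprint looks something like the snippet below; the resource names and the image value are made up for this sketch, only the assignment property matters:

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu-1804
      networks:
        - network: '${resource.Cloud_Network_1.id}'
          assignment: static   # triggers the dynamic vCenter customization spec
  Cloud_Network_1:
    type: Cloud.Network
    properties:
      networkType: existing
```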
After customization reboots the virtual machine once, a cron job that I created on the template executes at startup after a 90-second sleep, which is enough time for the virtual machine to be customized, rebooted, and connected to the network before the cron job does anything. After the initial reboot and past the 90-second mark, the cron job executes a shell script that enables cloud-init and initializes it, running all the needed cloud-init modules (init, config, and final).
Note: Feel free to increase the 90 seconds if you feel you need more time for the virtual machine to be customized.
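A sketch of the cron entry itself; the script path /etc/cloud/runonce.sh is my choice for this example, so adjust it to wherever you keep your script:

```shell
# Run once at boot: wait 90 seconds for vSphere customization to finish,
# then let the run-once script enable and initialize cloud-init
echo '@reboot root sleep 90 && /etc/cloud/runonce.sh' | sudo tee /etc/cron.d/cloudinit-runonce
```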
The end result: the virtual machine is customized with an updated hostname and an IP from our target static IP pool configured for the network it's connected to, without having to hack the Cloud-Config code any further to set up things like the hostname or even configure the network itself, and, more importantly, without conflicting with cloud-init, which is what the problem was all along.
Let’s get started, Eh!
Build a new Ubuntu 16.04 or 18.04 virtual machine from the certified ISO
Once the virtual machine is up and running, update the list of available packages and install any newly available versions of those packages to bring your template up to date:
sudo apt-get update && sudo apt-get -y upgrade
Install cloud-init on Ubuntu 16.04. Ubuntu 18.04 has cloud-init pre-installed, so you can skip this step:
sudo apt-get -y install cloud-init
Configure OVF as your Datasource, then save and exit
sudo dpkg-reconfigure cloud-init
Enable the traditional Guest OS Customization (GOSC) script by editing the /etc/cloud/cloud.cfg file and adding:
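The setting in question is the cloud-init GOSC flag; appended from the shell it would be:

```shell
# Allow traditional vSphere Guest OS Customization to run alongside cloud-init
echo 'disable_vmware_customization: false' | sudo tee -a /etc/cloud/cloud.cfg
```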
Ensure network configuration is disabled in /etc/cloud/cloud.cfg by adding, or un-commenting if it exists, the following:
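The stanza being referred to is the standard cloud-init network kill switch. It can live in cloud.cfg itself or, equivalently, in a drop-in file:

```shell
# Either un-comment/add this in /etc/cloud/cloud.cfg ...
#   network:
#     config: disabled
# ... or drop the same setting in as a separate config file:
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
```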
If a cloud-init network config is not found and no disable option is specified, cloud-init defaults to a fallback behavior, which is to use DHCP if you happen to reboot the server.
By specifying the “disabled” option we are telling cloud-init not to try and do anything with the network on each subsequent startup which allows the guest OS to use the config that was originally applied to the machine on first run.
Set /tmp not to be cleared by editing /usr/lib/tmpfiles.d/tmp.conf and adding the prefix # to line 11:
#D /tmp 1777 root root -
Configure open-vm-tools to start after dbus.service by editing the /lib/systemd/system/open-vm-tools.service file and adding the following under the [Unit] section:
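The line added under [Unit] is After=dbus.service. If you prefer not to edit the packaged unit file directly, the same dependency can be set with a systemd drop-in, which survives package updates:

```shell
# Equivalent to editing the unit file: add the dbus dependency via a drop-in
sudo mkdir -p /etc/systemd/system/open-vm-tools.service.d
printf '[Unit]\nAfter=dbus.service\n' | sudo tee /etc/systemd/system/open-vm-tools.service.d/override.conf
sudo systemctl daemon-reload
```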
Reduce the raise-network-interfaces timeout to 1 minute by editing the /etc/systemd/system/network-online.target.wants/networking.service file and changing (this is not applicable on Ubuntu 18.04):
TimeoutStartSec=5min to TimeoutStartSec=1min
Disable cloud-init on first boot, and until customization is complete, by creating the file /etc/cloud/cloud-init.disabled:
sudo touch /etc/cloud/cloud-init.disabled
Create a script, your_script.sh, in a known location; it will be called by a cron job that we will create later to enable and initialize cloud-init after the customization reboot. The script should contain the following commands:
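Based on the behavior described above (re-enable cloud-init, then run its init, config, and final stages), the script would be along these lines; the exact listing is a sketch, not the original file:

```shell
#!/bin/bash
# runonce.sh - re-enable cloud-init after vSphere customization has finished

# Remove the kill switch so cloud-init is allowed to run again
rm -f /etc/cloud/cloud-init.disabled

# Run the cloud-init stages in order: init, then the config and final modules
cloud-init init
cloud-init modules --mode config
cloud-init modules --mode final
```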
Make the template cleaning script executable as well:
sudo chmod +x your_clean_script.sh
Execute the Template Cleaning Script.
Shutdown the virtual machine and turn it into a template.
shutdown -h now
Note: Just be aware that the cron job might run if you boot the template to update it for any reason. So if you do pass the 90-second mark while making your changes, make sure to re-add the /etc/cloud/cloud-init.disabled file and then re-execute the cleanup script before shutting the template down. If you don't, cloud-init will execute on first boot; you will get the VM customization, but your Cloud-Config code won't be applied.
Click To See It All In Action On my YouTube Channel !
I have scripts on GitHub that you're welcome to download or fork, which you can apply to a base image once it's built to prepare it for cloud-init use.
There are four scripts that you can execute on a base CentOS/RHEL or Ubuntu image to install cloud-init and configure the image template to work with vSphere customization with DHCP or static IP assignments.
There are two files for each Linux distro: the ones with myblog at the end of the file name use the cron job approach I used in my blog, and the ones without use a custom run-once service that we create instead of a cron job. Both work; they are simply two different approaches, and you're welcome to use whichever one you prefer.
The script also creates both the run-once and clean scripts in the /etc/cloud folder, then runs them at the end before shutting down the VM, after which you manually convert it to a template.
Make sure after doing a git clone to convert Windows-style line endings to Unix-style, removing any carriage return characters; otherwise you will get an error like this when you try to execute the script:
“/bin/bash^M: bad interpreter: No such file or directory”
Though there are tools (e.g. dos2unix) available to convert between DOS/Windows (\r\n) and Unix (\n) line endings, you sometimes want to solve this rather simple task with tools available on any Linux box you connect to. Here is an example of how to use the sed command to do that quickly:
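For example, with GNU sed you can strip the carriage returns in place; demo.sh here is just a stand-in for whichever cloned script is affected:

```shell
# Simulate a script saved with Windows (CRLF) line endings
printf '#!/bin/bash\r\necho "hello"\r\n' > demo.sh
# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' demo.sh
# The script now runs cleanly
bash demo.sh
```

Where dos2unix is installed, `dos2unix demo.sh` achieves the same result.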
Continuing again on the same theme – Make the Private Cloud Easy – that we mentioned in the two previous blog posts, vRA 7.3 What's New – Part 1 and What's New – Part 2, we will continue to highlight more of the NSX integration enhancements. For this part of the series we will be focusing on the enhanced NAT port forwarding rules.
So let’s get started Eh!
Enhanced NAT Port Forwarding Rules
You now have the ability, as you configure an on-demand NAT network in the CBP (Converged Blueprint), to create forwarding NAT rules at design time on a one-to-many type NAT network component when you associate it with a non-clustered vSphere machine component or an on-demand NSX load balancer component.
You can define NAT rules for any NSX-supported protocol, then map a port or a port range from the external IP address of an Edge (source) to a private IP address in the NAT network component (destination).
These rules can be set in a specific order when configured at design time. They can also be added, removed, and re-ordered after you create them, for an existing deployment, as a day-2 action.
This only works with the one-to-many type NAT network component, which means that the one-to-one type NAT network component isn't supported for creating NAT rules in the CBP.
NAT Type One-to-Many
Also, the NAT network component can only be connected to a non-clustered vSphere machine, which means the number of configured instances for the vSphere machine in the blueprint can't be more than 1 for the instance minimum and maximum settings a user can request for a deployment.
D-NAT Rules that can be Ordered
If you must use a clustered vSphere machine, you have to leverage an on-demand load balancer if you want to create a NAT rule on a one-to-many type NAT network component, which can be associated with the VIP network of the NSX load balancer component.
Clustered Machine > 1 x Deployment
Load Balancer VIP settings depending on the network association
In the picture above, the NAT rules publish HTTP port 80 and HTTPS port 443 on the external IP address of the Edge, mapping those ports to the private IP and destination ports HTTP 8080 and HTTPS 8443 of the destination vSphere machine. Since the load balancer VIP network is on the internal private network connected to NIC 0 of the clustered vSphere machines, we create the virtual servers on the load balancer using HTTP port 8080 and HTTPS port 8443.
Again, I really want to highlight that the following elements are not supported for creating NAT rules:
NICs that are not in the current network
NICs that are configured to get IP addresses by using DHCP
Machine clusters without the use of a Load balancer
One-To-One type NAT network component
Change NAT Rules in an Existing Deployment
Now, after a successful deployment that includes one or more NAT forwarding rules, a user can later add, edit, and delete any existing NSX NAT rules in a deployed one-to-many NAT network. The user/owner can also change the order in which the NAT rules are processed, just as we showcased at blueprint design time.
Important Notes:
The Change NAT Rules operation is not supported for deployments that were upgraded or migrated from vRealize Automation 6.2.x to this vRealize Automation release.
You cannot add a NAT rule to a deployment that is mapped to a third-party IPAM endpoint such as Infoblox.
A user must log in to vRA as a machine owner, support user, business group user with a shared access role, or business group manager to be entitled to change the NAT rules in a network.
Once that is verified, a user can:
1. Select Items > Deployment.
2. Locate the deployment and display its children components.
3. Select the NAT network component to edit.
4. Click Change NAT Rules from the Actions menu.
5. Add new NAT port forwarding rules, reorder rules, edit existing rules, or delete rules. Whatever makes you happy!
6. When you have finished making changes, click Save or Submit to submit the reconfiguration request.
7. Check the status of your request under the Requests tab to confirm it is successful.
8. In my case, I simply changed the order, placing the HTTPS forwarding NAT rule first. If you click on the Request ID after it successfully completes, you will see just that.
This was short and sweet, hope you enjoyed it. Now go give it a shot.
Continuing on the same theme – Make the Private Cloud Easy – that we mentioned in the previous blog post, vRA 7.3 What's New – Part 1, we will highlight the NSX integration enhancements for just the NSX endpoint and on-demand load balancer added in this release. There are a lot more enhancements around the NSX integration that we will touch on in other parts of this What's New blog series, but because I want to keep each part short and sweet, I am going to talk only about the above-mentioned enhancements.
So let’s get started Eh!
First things first: with the new release of vRA 7.3 you can now create your own independent NSX endpoint and then associate its NSX settings with an existing vSphere/vCenter endpoint. As you probably know, or maybe you don't, in versions prior to vRA 7.3 the NSX Manager was added as part of the vSphere/vCenter endpoint creation.
To create a new NSX endpoint, select Infrastructure > Endpoints > New > Network and Security > NSX.
Adding New NSX Endpoint
Now, if you, like me, happen to do an upgrade, or perhaps migrated a vSphere/vCenter endpoint that was using an NSX Manager to a vRA 7.3 instance, a new NSX endpoint is created for you that contains an association between the source vSphere/vCenter endpoint and the newly created NSX endpoint.
Existing NSX Endpoint
NSX Endpoint vSphere to NSX Association
On-demand Load Balancer Controls
If you have worked with vRA and tried to create a blueprint, you know that if you have NSX configured for vSphere, you can drag an NSX on-demand load balancer component onto the design canvas and configure its settings for use with vSphere machine components and container components in the blueprint.
With the new release we made it even better and added many enhancements that now give you full control over how the load balancer is configured and deployed at request time when requesting a networking- and security-centric type of application.
When you add a load balancer component to a blueprint in the design canvas, you can choose either a default or a custom option when creating your virtual server definitions in the load balancer component, which is a new feature you couldn't use before, or edit them just like in the previous release.
The default option allows you to specify the virtual server protocol (HTTP, HTTPS, TCP, UDP), port, and description, and use defaults for all other settings such as Distribution, Health Check, and advanced settings (connection limits, etc.), which are therefore all dimmed and disabled.
The custom option allows you to define additional levels of detail for Distribution, Health Check, and even more advanced settings that you can configure and define.
In the Distribution tab you can specify the balancing algorithm for the pool members:
ROUND_ROBIN: Each server is used in turn according to the weight assigned to it.
IP-HASH: Selects a server based on a hash of the source IP address and the total weight of all the running servers.
LEASTCONN: Distributes client requests to multiple servers based on the number of connections already on the server. New connections are sent to the server with the fewest connections.
URI: The left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result designates which server receives the request. The URI is always directed to the same server as long as no server goes up or down.
HTTPHEADER: The HTTP header name is looked up in each HTTP request. If the header is absent or does not contain a value, the round robin algorithm is applied.
URL: The URL parameter specified in the argument is looked up in the query string of each HTTP GET request. If no value or parameter is found, then a round robin algorithm is applied.
You can also specify how persistence tracks and stores session data. Requests are directed to the same pool member for the life of a session or during subsequent sessions.
None: No persistence. Session data is not stored or tracked.
Cookie: Uses a unique cookie to identify the session the first time that a client accesses the site. In subsequent requests, the cookie persists the connection to the appropriate server.
Source IP: Tracks sessions based on the source IP address. When a client requests a connection to a virtual server that supports source address affinity persistence, if the client has previously connected it is returned to the same pool member.
MSRDP: Maintains persistent sessions between Windows clients and servers that are running the Microsoft Remote Desktop Protocol (RDP) service.
SSL Session ID: Uses an NSX-supported HTTPS traffic pattern to store and track sessions.
Health Check Tab
The Health Check tab allows you to specify the port number on which the load balancer listens to monitor the health of the virtual server members, and the URL used in the sample request to check a web site's health, based on the available settings.
In the Advanced tab you can further configure the NSX virtual server with settings such as:
Connection limit: The maximum concurrent connections in NSX that the virtual server can process. This setting considers the number of all member connections. ( 0 = no limit )
Connection rate limit: The Maximum number of incoming connection requests in NSX that can be accepted per second. This setting considers the number of all member connections. ( 0 = no limit )
Enable Acceleration: Specifies that each virtual IP uses the faster L4 load balancer rather than the L7 load balancer.
Transparent: Allows the load balancer pool members to see the IP address of the machines calling the load balancer. If not selected, the pool members see the traffic source IP address as the load balancer's internal IP address.
Max Connections: The maximum number of concurrent connections that a single member can accept. If the number of incoming requests is higher than this value, requests are queued and then processed in the order in which they are received as connections are released. ( 0 = no limit )
Min Connections: The minimum number of concurrent connections that a single member must always accept. ( 0 = no minimum)
Oh my god, I can't believe this is only a dot release. As you read through the What's New section of the vRA 7.3 Release Notes and look at the massive number of features shipping in it, it's just mind-blowing.
I can't describe the amount of excitement I'm experiencing right now: a new version of vRA is officially out, and I can finally talk about it and showcase some of its 20+ spotlight features in this multi-part vRA 7.3 What's New blog series.
VMware continues the trend of delivering awesome innovations, improved user experience, and greater, deeper integration into the ecosystem it manages, while aligning its automation technology with the following core investment strategies:
Make the Private Cloud Easy
Manage Across Clouds
In part 1 of this series of vRA 7.3 What's New blogs, I will be showcasing the Parameterized Blueprints feature, which falls under the “Make the Private Cloud Easy” strategy pillar.
But before we get started, I thought I would mention these important upgrade side notes:
You must upgrade to either vRealize Automation 6.2.5 or 7.1, before you can upgrade to version 7.3
The memory configuration should be increased to 18 GB on the vRA appliance if you happened to reduce it, like I did in my lab; otherwise you will get an error like the one below.
A system reboot is of course required to complete the update, assuming everything went well with the vRA master appliance and its replicas, if any.
After you reboot the vRA appliance, a “Waiting for all services to start” update status appears on the Update Status page. The IaaS update automatically starts when the system is fully initialized and all services are running, so you don't have to upgrade the IaaS components yourself manually like we used to do with older editions. Instead, you can sit back, relax, and simply observe the IaaS upgrade progress on the Update Status page. How freaking cool is that, eh!
The automated update process is also supported in the distributed deployment model: after the master vRA appliance is successfully updated, all the replica nodes get updated as well. After that, the focus shifts to the IaaS components, and the same thing happens, with all the related IaaS services getting updated.
The first IaaS server component can take about 30 minutes to finish, so be patient.
Also note that the active Manager Service node changes from a manual election to a system decision about which node becomes the failover server. The system enables this feature during the upgrade.
So now that we've got that out of the way (big sigh!), let's get started on the main topic, eh!
Parameterized Blueprints to Enhance Re-usability and Reduce Sprawl
The new Component Profiles allow us to define both virtual machine sizes (CPU, memory, and storage) and source image attributes, which helps the infrastructure architect enable what we refer to as the “T-shirt sizing” option for blueprint requests that an entitled user can pick from.
This abstraction using the Component Profiles allows us to efficiently manage blueprints by increasing re-usability while significantly reducing blueprint sprawl and simplifying your catalog offerings.
You can use component profiles to parameterize blueprints. Rather than create separate small, medium, and large blueprints for a particular deployment type, you can create a single blueprint with a choice of small, medium, or large virtual machine sizes. Users can select one of these sizes when they deploy the catalog item.
From a governance and control perspective, we continue to have the ability to trigger approval policies, but now these approvals can be based on the user's size or image selection conditions, including overrides.
Component profiles, like everything else, can be imported and exported using the vRealize CloudClient.
The available component profile types are Size and Image. When you add component profiles to a machine component, the component profile settings override other settings on the machine component, such as the number of CPUs or amount of storage.
Be aware that you cannot define component profile types other than those two.
Component profiles are only available for vSphere machine components, where you can use them to define vSphere machine components in a blueprint.
Defining Component Profile Settings
You can define multiple named value sets within the Size and Image component profile types and add one or more of the value sets to machine components in a blueprint. Each value set that you define for a component profile type (Size or Image) contains the following configurable settings:
Name that requesters see when they provision a machine
Unique identifier for the tenant
Set of value choices for each option in the value set
When you request provisioning from the catalog, you can select from available value set choices for the Size and Image component profiles. When you choose one of the value sets, its corresponding property values are then bound to the request.
Configuring Component Profile Size Settings for Catalog Deployment
1. Log in to the vRealize Automation console as an administrator with tenant administrator and IaaS administrator access rights.
2. Select Administration > Property Dictionary > Component Profiles.
3. Click Size in the Name column, or highlight it and click Edit.
4. Click the Value Sets tab and define a new value set by clicking New, to create, for example, a small and a large deployment value set.
Small Value Set
Now we have two value sets for the Size component profile:
Small (1 vCPU, 1 GB memory, 40 GB storage)
Large (2 vCPU, 4 GB memory, 80 GB storage)
Next, add one or more value sets to the Size component profile by using the Profiles tab on a vSphere machine component, as we will see next.
Configuring Machine Blueprint by Adding the Size Component Profile to the Blueprint.
1. Log in to the vRealize Automation console as an infrastructure architect.
2. Select Design > Blueprints.
3. Create a new blueprint, or, in our case, edit the existing CentOS 7 on vSphere – Base blueprint.
4. Select the machine type and click Profiles, then add the Size component profile we defined by clicking the +Add link.
5. Once it is added and listed under the Profiles tab, select the Size component profile and click Edit Value Sets.
6. Select the value sets you want to associate with the CentOS 7 on vSphere – Base blueprint. Here we will select both Small and Large, set Small as the default, and click OK to configure the blueprint's Size component profile with the selected value sets (Small and Large).
7. Once you're done, click Finish to save the blueprint parameters we just added, and you're ready to request the CentOS 7 on vSphere – Base blueprint with the configured size parameters.
8. Select the vSphere_Machine within the blueprint deployment you requested, simply select the size of the machine, AKA “T-shirt sizing”, and submit your request.
We can simply repeat the same process for the Image component profile, where we define image value sets that we can present to the requester as options to choose from.
Users can select from linked clone or full clone type images across Windows and Linux OSs, for example. I will leave that one for you to explore, my friends.