Infoblox IPAM Plug-in 1.1 Integration with vRealize Automation 8.1 / vRealize Automation Cloud

Hello Everyone

Welcome to VMwareLab, your VMware Cloud Management blog.

With vRealize Automation you can use an external IPAM provider to manage IP address assignments for your blueprint deployments.

In this integration use case, you use an existing IPAM provider package, in this case an Infoblox package, and an existing running vRealize Automation environment to build a provider-specific IPAM integration point.

You configure an existing network and create a network profile to support IP address allocation from the external IPAM provider. Finally, you create a blueprint that is matched to the network and network profile and deploy networked machines using IP values obtained from the external IPAM provider.

 


 

The Infoblox IPAM Plug-in allows us to easily integrate vRealize Automation 8.1 and vRealize Automation Cloud with the Infoblox DDI appliance.

One of the main benefits of using Infoblox DDI is that it allows IT teams to consolidate DNS, DHCP, and IP address management into a single platform, deployed on-site and managed from a common console.

The Infoblox IPAM plug-in 1.1 integration for vRealize Automation 8.1 gives us IP address allocation and DNS record creation and deletion for our Cloud Assembly or Service Broker deployments.

The plug-in is available on the VMware Solution Exchange and uses Action Based Extensibility (ABX) to retrieve IP data from the Infoblox grid, as well as to update the grid with DNS host records and other data for the deployed virtual machines (VMs) and networks.

Prerequisites

  • vSphere private cloud
  • vRealize Automation 8.1
  • Infoblox NIOS or vNIOS appliance with WAPI version 2.7 or later (a quick check is shown after this list)
  • Infoblox grid is configured for IPAM and DNS
  • A good place to work and an ice cold beer.
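
Before installing the plug-in, it can help to confirm the grid is reachable and the WAPI version meets the minimum. A minimal sketch with curl; the hostname and credentials are placeholders, and -k skips TLS verification (lab use only):

# Query a basic WAPI object; an "unsupported version" error means your WAPI is older than 2.7
curl -k -u admin:infoblox "https://infoblox.example.com/wapi/v2.7/networkview"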

In this video blog we are going to go through all the steps required to install, configure, and use the Infoblox IPAM plugin 1.1 for vRA 8.1 / vRA Cloud.

Let’s get started, Eh!

 

Important Notes

  • The vRA 8.1 Infoblox IPAM plug-in v1.1 is currently managed by VMware. It is not yet officially supported by Infoblox, but Infoblox is actively working toward certifying and providing support for this plug-in.
  • Plugin functionality is currently limited to IP address allocation/de-allocation, network creation/deletion, and DNS record creation/deletion.
  • If you use a CA-signed certificate on Infoblox (self-signed certificates shouldn't have this issue), you may encounter the error "Unable to validate the provided access credentials: Failed to validate credentials". If you know for sure that your credentials are correct, you likely have an Infoblox certificate issue. To fix it, check my colleague Dennis Derks' blog here, or start with the quick check below.
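
A quick way to see which certificate chain the appliance actually presents (the hostname is a placeholder); a missing intermediate CA is a common cause of this error:

# Print the presented certificate chain; </dev/null closes the connection right away
openssl s_client -connect infoblox.example.com:443 -showcerts </dev/null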

If you have any comments, please leave them in the comment section here on the blog or on the YouTube video, and please hit the like button if you liked the video.

To stay up to date with my latest blogs and videos, make sure to follow my blog site and do subscribe to my YouTube channel VMwareLab and smash that notification bell.

 

The End, Eh!


vRealize Automation 8.1 Multi-Tenancy Setup with vRealize Suite Lifecycle Manager 8.1

Today VMware is releasing VMware vRealize Automation 8.1, the latest release of VMware’s industry-leading, modern infrastructure automation platform.

This release delivers new and enhanced capabilities to enable IT/Cloud admins, DevOps admins, and SREs to further accelerate their on-going datacenter infrastructure modernization and cloud migration initiatives, focused on the following key use cases:

  • Self-service hybrid cloud, multi-cloud automation, infrastructure DevOps, and Kubernetes infrastructure automation.
  • vRealize Automation 8.1 supports the latest release of VMware Cloud Foundation 4.0 to enable self-service automation and infrastructure DevOps for VMware Cloud-based private and hybrid clouds.
  • Integration with vSphere 7.0 with Kubernetes to automate Kubernetes supervisor cluster and namespace management.

But wait, there is more…

We are also releasing VMware vRealize Suite Lifecycle Manager 8.1, the latest release of the lifecycle management and automation platform for VMware vRealize Suite.

vRealize Suite Lifecycle Manager delivers a comprehensive, integrated product and content lifecycle management solution for vRealize Suite, helping customers to speed up deployments and updates, optimize and automate ongoing product and content management, and apply Day 0 to Day 2 operational best practices across all components of vRealize Suite.

Some of the other new enhancements we are introducing in vRSLCM 8.1 are:

  • Product references: the product details page now has a new tab listing all inter-product integrations.
  • Inventory Sync: now triggered at the environment level to sync all products within the managed environment, instead of triggering it per product (which you can still do if you want to sync just one product).
  • Global Environment vIDM Day 2 actions: the global environment vIDM View Details page now has Trigger Cluster Health, Power On, and Power Off Day 2 operations for single-node and clustered vIDM deployments.
  • All Products Day 2 actions: all products under environments now have Re-Trust With Identity Manager (whenever the vIDM certificate changes) and Re-Register With Identity Manager (whenever the vIDM FQDN changes) Day 2 operations.

In this vBlog I’m covering vRealize Automation multi-tenancy, its requirements, and the setup workflow you need to follow to enable dedicated infrastructure multi-tenancy for vRealize Automation 8.1, leveraging vRealize Suite Lifecycle Manager 8.1, which offers our customers more flexibility, control, and security around tenant management.

Let’s get started, Eh!

Important Notes

  • Certificate update/replace operation: a change in the vIDM certificate requires re-trusting it on all products/services currently integrated with it. While updating the certificate, users are given the option to select all currently referenced products to opt in for re-trust.
  • Enabling tenancy: once tenancy is enabled, vIDM can be accessed only through tenant FQDNs. All existing products/services currently integrated with vIDM must re-register vIDM against its master tenant alias FQDN. While enabling tenancy, users are given the option to select all currently referenced products to opt in for re-register.

If you have any comments, please leave them in the comment section here on the blog or on the YouTube video, and please hit the like button if you liked the video.

To stay up to date with my latest blogs and videos, make sure to follow my blog site and do subscribe to my YouTube channel VMwareLab and smash that notification bell.

The End, Eh!


How to Deploy vRA 8.0.1 While Dealing with the Built-in Containers' Root Password Expiration That Prevents Installations of vRealize Automation 8.0 and 8.0.1

Let’s get into it right away.

A few weeks ago, the 90-day account expiry in the vRealize Automation 8.0 and 8.0.1 GA releases was exceeded for both the Postgres and Orchestrator services, which run as Kubernetes pods.

This issue is resolved in vRealize Automation 8.1, which is soon to be released as of the writing of this post (generally available in 1H20).

This issue is also resolved in the cumulative update for vRealize Automation 8.0.1 (HF1/HF2), so if you already installed the HF1 patch a while ago, before the account expiry, you have nothing to worry about.

But what about existing deployments that were not yet updated with HF1 or HF2, or net-new deployments of vRealize Automation 8.0/8.0.1? In this blog I address those scenarios: what needs to be done to keep benefiting from everything the automation solution has to offer today, and how to get a successful deployment if you choose to deploy vRealize Automation 8.0.1 before vRealize Automation 8.1 is released. Once 8.1 is out, you really don't have to worry about any of this.

So let’s get started, Eh!

Existing Deployments

For existing vRA 8.0 or 8.0.1 customers with active working instances, you have two options before you can reboot the appliance or restart the vRA services:

Option 1

Apply the workaround mentioned in KB 78235 and stay at vRA 8.0.1.

Scenario 1 : vRealize Automation 8.0/8.0.1 is up and running

  1. SSH into each of the nodes
  2. Execute vracli cluster exec -- bash -c 'echo -e "FROM vco_private:latest\nRUN sed -i s/root:.*/root:x:18135:0:99999:7:::/g /etc/shadow\nRUN sed -i s/vco:.*/vco:x:18135:0:99999:7:::/g /etc/shadow" | docker build - -t vco_private:latest'
  3. Execute vracli cluster exec -- bash -c 'echo -e "FROM db-image_private:latest\nRUN sed -i s/root:.*/root:x:18135:0:99999:7:::/g /etc/shadow\nRUN sed -i s/postgres:.*/postgres:x:18135:0:99999:7:::/g /etc/shadow" | docker build - -t db-image_private:latest'
  4. Execute /opt/scripts/backup_docker_images.sh to persist the new changes through reboots (an optional check follows this list).
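
As an optional sanity check (my own addition, not part of KB 78235), you can confirm the rebuilt image carries the reset password-expiry fields, assuming grep is present in the image:

# The root entry should show the patched expiry fields (18135:0:99999:7)
vracli cluster exec -- bash -c "docker run --rm vco_private:latest grep '^root:' /etc/shadow"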

Scenario 2 : vRealize Automation 8.0/8.0.1 is already down as a result.

  1. SSH into each of the nodes
  2. Run /opt/scripts/deploy.sh --onlyClean on a single vRA node to shut down the services safely.
  3. Once completed, repeat steps 2 through 4 from Option 1 -> Scenario 1.
  4. Run /opt/scripts/deploy.sh to start the services up.

Option 2

Apply vRealize Automation 8.0.1 HF1 or HF2 with vRealize Suite Lifecycle Manager 8.0.1 Patch 1.

Scenario 1 : vRealize Automation 8.0/8.0.1 is up and running

It is recommended to install vRealize Suite Lifecycle Manager 8.0.1 patch 1 before vRealize Automation 8.0.1 patch 1. The vRealize Suite Lifecycle Manager 8.0.1 Patch 1 contains a fix for some intermittent delays in submitting the patch request.

Apply vRealize Automation 8.0.1 patch 1 leveraging vRealize Suite Lifecycle Manager 8.0.1 Patch 1.

Scenario 2 : vRealize Automation 8.0/8.0.1 is already down as a result.

  1. SSH into each of the nodes
  2. Run /opt/scripts/deploy.sh --onlyClean on a single vRA node to shut down the services safely.
  3. Once completed, repeat steps 2 through 4 from Option 1 -> Scenario 1.
  4. Run /opt/scripts/deploy.sh to start the services back up.
  5. Apply vRealize Automation 8.0.1 patch 1 leveraging vRealize Suite Lifecycle Manager 8.0.1 Patch 1.

Note: we highly recommend always being on the most recent builds and patches.

New Deployments

If you need a video tutorial on how to install vRealize Automation 8.x, check either my YouTube video on how to deploy vRA 8.x with the vRealize Easy Installer here, or my previous blog post here, which also includes the video.

Please subscribe and smash that tiny notification bell to get notified of any new and upcoming videos when you check out my YouTube channel.

Now that that's out of the way: for new deployments of 8.0.1, and until 8.1 (where the issue is resolved) is released, it is really very simple.

Once vRA 8.0.1 is deployed via vRealize Suite Lifecycle Manager 8.0.1 and is reachable on the network, do the following:

  1. SSH into the vRA node
  2. Execute kubectl get pods -n prelude to see if vRA has started to deploy a few of the services in the prelude namespace.
  3. Once confirmed, proceed to step 4.
  4. Execute vracli cluster exec -- bash -c 'echo -e "FROM vco_private:latest\nRUN sed -i s/root:.*/root:x:18135:0:99999:7:::/g /etc/shadow\nRUN sed -i s/vco:.*/vco:x:18135:0:99999:7:::/g /etc/shadow" | docker build - -t vco_private:latest'
  5. Execute vracli cluster exec -- bash -c 'echo -e "FROM db-image_private:latest\nRUN sed -i s/root:.*/root:x:18135:0:99999:7:::/g /etc/shadow\nRUN sed -i s/postgres:.*/postgres:x:18135:0:99999:7:::/g /etc/shadow" | docker build - -t db-image_private:latest'
  6. Execute /opt/scripts/backup_docker_images.sh to persist the new changes through reboots.
  7. Keep checking the status of the pods by repeatedly running kubectl get pods -n prelude until all the pods are up and running (a simple loop for this follows below).
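
For step 7, a minimal loop so you don't have to keep retyping the command:

# Re-list the pods every 30 seconds until everything shows Running (Ctrl+C to stop)
while true; do kubectl get pods -n prelude; sleep 30; done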

If you're only installing one appliance and you notice that the vco-app pod status is CrashLoopBackOff:


You will need to delete the pod so a new one gets provisioned from the updated Docker image we built in step 4, by executing the following command:

kubectl delete pods -n prelude vco-app-pod-name
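
Note that vco-app-pod-name above is a placeholder; list the actual pod name first:

# Find the real vco-app pod name to pass to kubectl delete
kubectl get pods -n prelude | grep vco-app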

If you're installing a cluster, we can't simply delete the postgres pod to fix it (the other postgres instances on the remaining nodes need it to replicate data, and other services that depend on postgres would fail too), so it's better to shut down all the services on each of the nodes and do the following:

  1. SSH into the vRA node
  2. Execute kubectl get pods -n prelude to see if vRA has started to deploy a few of the services in the prelude namespace.
  3. Execute /opt/scripts/deploy.sh --onlyClean on each of the nodes to stop the services.
  4. Once completed execute the workaround repeating step 4 through 6
  5. Run /opt/scripts/deploy.sh on each of the nodes to start the services up.

Once your appliance or cluster is up and running, apply vRealize Automation 8.0.1 HF1 or HF2 (soon to be released as well), as I mentioned above in Option 2 for existing deployments.

If you already have one appliance with HF1, you can't scale out to create a cluster, since the original image is not patched with HF1. So unfortunately you have to wait a couple more weeks until 8.1 is out; then you can upgrade and scale out your deployment to create a production-ready cluster.

If you do have any questions please post them below. I will try my best to have them answered.

Hope this has been helpful, if you have made it to the end.

The End Eh!


vSphere Customization with Cloud-init While Using vRealize Automation 8 or Cloud.

After spending an enormous amount of time, starting somewhere in the summer of last year, I have been trying to get vSphere customization to work with cloud-init while using vRealize Automation 8 or vRealize Automation Cloud as the automation platform to provision virtual machine deployments and to install and configure the applications running on them.

I finally have a workaround that I can say is guaranteed to work every single time, until something better comes along, i.e. software components like we have today within the vRA 7.x platform, which are planned for vRA 8.x in Q3 2020 if everything goes as planned.

With some out-of-the-box thinking, I was able to use static IP assignment (assignment: static) within the vRA blueprints to leverage the static IP pool and the network metadata that we define in vRA via network profiles for the target networks we want to connect to. I am using cloud-init with Ubuntu 16.04 and Ubuntu 18.04 for now, but the principle should be the same for other Linux distributions, even though it seems RHEL is the only OS today that just works, provided traditional Guest OS Customization (GOSC) is set in cloud-init.

Note: this will also work if you use DHCP IP assignment.

Hoping this was worth the time, I am documenting in this blog the step-by-step instructions on how to prepare your vSphere templates while leveraging cloud-init, plus, for your own reference, a list of all the online resources I looked at while doing my research.

I will also add a video to the blog later that showcases the entire template preparation, followed by a demo of a typical vRA 8 deployment using static IP assignment while leveraging cloud-init to install selected packages per machine component and execute various commands to set up an application.
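
To give a rough idea of what that kind of per-machine Cloud Config looks like, here is a minimal sketch of the sort you attach to a machine component in a blueprint; the package and command are placeholders of my own, not from the actual demo:

cloudConfig: |
  #cloud-config
  packages:
    - nginx
  runcmd:
    - systemctl enable --now nginx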

I still say that this shouldn't be this hard for our customers to set up, and hopefully software components, as I mentioned, will save us all from this complexity. Of course, you can also do this via various configuration management tools such as Ansible and Puppet, which, by the way, vRealize Automation 8 and Cloud integrate with out of the box today.

How does it work?

At a high level: when the virtual machine first boots up, it gets rebooted to be customized by the dynamic vCenter customization spec that gets created because we are using the static assignment property (assignment: static) within the blueprint code, as shown in the snippet below. I make sure that during that window cloud-init is in a disabled state.

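The screenshot itself is not reproduced here, but a minimal blueprint sketch of the relevant part looks like the following; the resource names, image, and flavor values are placeholders:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu18
      flavor: small
      networks:
        - network: '${resource.Cloud_Network_1.id}'
          assignment: static
  Cloud_Network_1:
    type: Cloud.Network
    properties:
      networkType: existing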

After the customization reboots the virtual machine once, a cron job that I created on the template executes at startup after a 90-second sleep, which is enough time for the virtual machine to be customized, rebooted, and connected to the network before the cron job actually runs. After the initial reboot, and past the 90-second mark, the cron job executes a shell script that enables cloud-init and initializes it, running all the needed cloud-init stages (init, config, and final).

Note: feel free to increase the 90 seconds if you need more time for the virtual machine to be customized.

The end result: the virtual machine is customized with an updated hostname and an IP from the static IP pool configured for the network it's connected to, without having to hack the Cloud Config code any further to set things like the hostname or configure the network itself, and, more importantly, without conflicting with cloud-init, which was the problem all along.

Let’s get started, Eh!

Template Preparation Steps

  • Once the virtual machine is up and running, update the list of available packages and install any newly available versions of those packages to bring your template up to date.

sudo apt-get update && sudo apt-get -y upgrade

  • Install cloud-init on Ubuntu 16.04 (Ubuntu 18.04 has cloud-init pre-installed, so you can skip this step):

sudo apt-get -y install cloud-init

  • Configure OVF as your Datasource, then save and exit

sudo dpkg-reconfigure cloud-init
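
dpkg-reconfigure is interactive; if you would rather script this step, writing the selection file directly should be equivalent (an assumption worth verifying on your build, as 90_dpkg.cfg is the file the reconfigure dialog writes on Ubuntu):

# Select OVF as the only cloud-init datasource
echo 'datasource_list: [ OVF ]' | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg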

  • Enable the traditional Guest OS Customization (GOSC) script by editing the /etc/cloud/cloud.cfg file and adding:

disable_vmware_customization: true

  • Ensure network configuration is not disabled in /etc/cloud/cloud.cfg by deleting or commenting out the following if it exists:

network:
  config: disabled

Also make sure that no /etc/cloud/cloud.cfg.d/* configuration file exists with equivalent content, such as
99-disable-networking-config.cfg containing "network: {config: disabled}".

  • Set /tmp not to be cleared by editing /usr/lib/tmpfiles.d/tmp.conf and adding a # prefix to line 11:

#D /tmp 1777 root root -

  • Configure open-vm-tools to start after dbus.service by editing the /lib/systemd/system/open-vm-tools.service file and adding the following under the [Unit] section:

After=dbus.service
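
As an alternative to editing the packaged unit file, which a package update can overwrite, a systemd drop-in achieves the same thing; a minimal sketch:

# Create a drop-in override instead of touching the packaged unit
sudo mkdir -p /etc/systemd/system/open-vm-tools.service.d
printf '[Unit]\nAfter=dbus.service\n' | sudo tee /etc/systemd/system/open-vm-tools.service.d/override.conf
sudo systemctl daemon-reload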

  • Reduce the raise-network-interfaces timeout to 1 minute by editing the /etc/systemd/system/network-online.target.wants/networking.service file and making the following change (not applicable on Ubuntu 18.04):

TimeoutStartSec=5min to TimeoutStartSec=1min

  • Disable cloud-init on first boot, and until customization is complete, by creating the file /etc/cloud/cloud-init.disabled:

sudo touch /etc/cloud/cloud-init.disabled

  • Create a script your_script.sh in a known location; a cron job that we will create later calls it to enable and initialize cloud-init after the customization reboot. The script should contain the following commands:

# Re-enable cloud-init, then run its stages in order, pausing between them
sudo rm -rf /etc/cloud/cloud-init.disabled
sudo cloud-init init
sleep 20
sudo cloud-init modules --mode config
sleep 20
sudo cloud-init modules --mode final
  • Make the script executable:

sudo chmod +x your_script.sh

  • Create a Cron Job that will run after 90 sec of sleep at boot by typing crontab -e and entering the following:

@reboot ( sleep 90 ; sh /Script_path/your_script.sh )

  • Copy the content below into a template cleaning script, your_clean_script.sh. You can replace cloudadmin with the user you set up when you installed the Ubuntu OS:
#!/bin/bash

# Add usernames to /etc/sudoers for passwordless sudo
users=("ubuntu" "cloudadmin")

for user in "${users[@]}"; do
  # Append a passwordless-sudo rule only if one is not already present
  if ! grep -q "^$user" /etc/sudoers; then
    echo "$user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
  fi
done

#grab Ubuntu codename
codename="$(lsb_release -c | awk '{print $2}')"

#stop services for cleanup
service rsyslog stop

#clear audit logs
if [ -f /var/log/audit/audit.log ]; then
  cat /dev/null > /var/log/audit/audit.log
fi
if [ -f /var/log/wtmp ]; then
  cat /dev/null > /var/log/wtmp
fi
if [ -f /var/log/lastlog ]; then
  cat /dev/null > /var/log/lastlog
fi

#cleanup persistent udev rules
if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then
  rm /etc/udev/rules.d/70-persistent-net.rules
fi

#cleanup /tmp directories
rm -rf /tmp/*
rm -rf /var/tmp/*

#cleanup current ssh keys
#rm -f /etc/ssh/ssh_host_*

#cat /dev/null > /etc/hostname

#cleanup apt
apt-get clean

#clean machine ID so every clone generates a unique one on first boot
truncate -s 0 /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

#clean cloud-init state so it runs fresh on the next boot
cloud-init clean --logs --seed

#cleanup shell history
history -w
history -c

  • Make the template cleaning script executable as well:

sudo chmod +x your_clean_script.sh

  • Make sure you can switch to the root user by editing the file /etc/ssh/sshd_config and changing PermitRootLogin to yes:

PermitRootLogin yes
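
If you prefer to script that change, a one-liner sketch (remember to revert it after the cleanup script has run):

# Enable root login over SSH and restart the service (Ubuntu's unit name is ssh)
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart ssh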

  • Set a password for root

sudo passwd root

Note: the reason for the above is to be able to execute the cleanup script without issues; I personally had problems running it with sudo on Ubuntu 18.04. You can always revert this once the cleanup script has been executed.

  • Execute the Template Cleaning Script.

sudo /Script_path/your_clean_script.sh

  • Shutdown the virtual machine and turn it into a template.

shutdown -h now

Note: just be aware that the cron job will also run if you boot the template to update it for any reason. So if you do pass the 90-second mark while making your changes, make sure to re-add the /etc/cloud/cloud-init.disabled file and then re-execute the cleanup script before shutting down the template.

If you don't, cloud-init will execute on first boot; you will get the VM customization, but your Cloud Config code won't be applied.

Click to see it all in action on my YouTube channel!

 

Happy Template Building! Please share! 

The End Eh!

Resources:

https://ubuntu.com/engage/cloud-init-whitepaper
https://debconf17.debconf.org/talks/164/
https://cloudinit.readthedocs.io/en/latest/
https://events.linuxfoundation.org/wp-content/uploads/2017/12/Cloud-init-The-cross-cloud-magic-sauce_Smith_moser.pdf
https://www.youtube.com/watch?v=RHVhIWifVqU
https://www.youtube.com/watch?v=y8WA1BUlT-Q
https://linuxtechlab.com/executing-commands-scripts-at-reboot/
https://blogs.vmware.com/management/2019/02/building-a-cas-ready-ubuntu-template-for-vsphere.html
http://kb.vmware.com/s/article/56409
https://kb.vmware.com/s/article/59687
http://kb.vmware.com/s/article/59557
http://kb.vmware.com/s/article/2378666
https://blah.cloud/infrastructure/using-cloud-init-for-vm-templating-on-vsphere/
http://ubuntu.com/blog/cloud-init-v-18-2-cli-subcommands
http://lucd.info/2019/12/06/cloud-init-part-1-the-basics/

 


Part 3: vRealize Automation 8.0 Deployment with vRealize Suite Lifecycle Manager 8.0

In Part 2 of my vRealize Automation 8.0 blog video series, we upgraded vRealize Suite Lifecycle Manager 2.1 to 8.0 by performing a side-by-side migration leveraging the vRealize Easy Installer, while importing the management of both VMware Identity Manager 3.3.0 and the vRealize Suite 2018 environment.

In this blog video we will be using vRealize Lifecycle Manager 8.0 to deploy vRealize Automation 8.0 in a new environment.

Now, as for requirements, you will need:

  1. vRealize Lifecycle Manager 8.0
  2. VMware Identity Manager 3.3.1
  3. A new Hostname, IP Address and a DNS record for the new vRA 8.0 appliance that the vRealize Suite Lifecycle Manager 8.0 will be creating.
  4. Product Mapping is set with the install and upgrade binaries for the new vRealize Suite 2019 Products.

 

Deployment Workflow


Please note that the installation process in the video after hitting Submit is fast-forwarded.

The End, Eh!


Part 2: Migration of vRSLCM 2.x Version to vRealize Suite Lifecycle Manager 8.0

If you happen to have an existing vRSLCM 2.x and vIDM 3.3.0 in your environment then you will need the vRealize Easy Installer to migrate your existing vRSLCM 2.x instance to vRSLCM 8.0.

Once your migration to vRSLCM 8.0 is complete, you can upgrade your vIDM instance to 3.3.1, since it's a requirement before you can install vRealize Automation 8.0 with vRealize Suite Lifecycle Manager 8.0.

Again, as a reminder: vRealize Automation 8.0 is installed, configured, managed, and upgraded only through vRealize Suite Lifecycle Manager 8.0.

Now, as for requirements, you will need:

  1. A new Hostname, IP Address and a DNS record for the new vRSLCM 8.0 appliance that the vRealize Easy Installer will be creating.
  2. To make sure that the password for the sshuser on the existing vIDM appliance is not expired.
  3. To enable root access for SSH on the existing vIDM appliance following VMware KB 2047626
  4. To Download the install and upgrade binaries for vRealize Suite 2019
  5. To Make sure you have enough storage on the new vRSLCM 8.0 appliance.

Migration Workflow


Please note that the installation process in the video after hitting Submit is fast-forwarded.

Note (vIDM upgrade support):

  • Green-field vRealize Suite Lifecycle Manager 8.0 supports only version 3.3.1 of VMware Identity Manager for install or import.
  • Older versions (2.9.2, 3.2.0, 3.2.0.1, and 3.3.0) of VMware Identity Manager are supported only for existing vRealize Suite Lifecycle Manager instances that are being migrated to vRealize Suite Lifecycle Manager 8.0.
  • Upgrade support from an older VMware Identity Manager to the latest version is only available if it conforms to the vRealize Suite Lifecycle Manager supported form factor.
  • Versions prior to vRealize Suite Lifecycle Manager 8.0 allowed only a single instance of VMware Identity Manager to be deployed, with an embedded connector and an embedded PostgreSQL database.
  • Upgrading VMware Identity Manager within vRealize Suite Lifecycle Manager 8.0 to the latest version is only supported if it conforms to the above-mentioned form factor.

Otherwise, the upgrade has to be performed outside vRealize Suite Lifecycle Manager; once upgraded, it can be re-imported at any time by triggering an Inventory Sync in vRealize Suite Lifecycle Manager 8.0.

 

The End, Eh!

Helpful Links You Might Need

Resetting the admin@localhost password in vRealize Suite Lifecycle Manager

Resetting the root password on Photon OS


Part 1: vRealize Automation 8.0 Simple Deployment with vRealize Easy Installer

On October 17th, 2019, VMware announced the next major release of vRealize Automation. It uses a modern Kubernetes-based microservices architecture and brings vRA Cloud capabilities to the on-premises form factor.

What’s New

The many benefits of vRA 8.0 include:

  • A modern platform using a Kubernetes-based microservices architecture
  • Easy to setup and consume multi-cloud infrastructure surface
  • Embedded vRO 8.0 Web Client and Orchestrator’s new release features
  • Deliver Infrastructure-as-Code using a declarative YAML syntax
  • Cloud Agnostic Blueprints
  • Iterative development of Blueprints
  • Self-service catalog coupled with agile governance
  • Collaboration across teams via sharing of objects
  • Kubernetes/container management
  • Deploy IPv6 workloads on dual-stack IP (IPv4/ IPv6) networks in vSphere
  • CI/CD pipeline and automated application release management
  • New Action Based Extensibility (ABX), which allows you to write lightweight scripts using Node.js and Python.
  • Git Integration to manage all blueprints, workflows, actions and pipelines.

For more information, kindly refer to the Release Notes

vRealize Automation 8.0 is installed, configured, managed, and upgraded only through vRealize Suite Lifecycle Manager 8.0.

In the video posted below, I'm going to provide the step-by-step process of using the vRealize Easy Installer to:

  • Install vRealize Suite Lifecycle Manager 8.0
  • Deploy VMware Identity Manager 3.3.1 and register it with vRealize Automation.
  • Install new instance of vRealize Automation 8.0

 

Installation Workflow


Please note that the installation process in the video after hitting Submit is fast-forwarded.

The End, Eh!
