
Simplifying the backup and Log Shipping of SQL Server via Commvault LiveSync Replication
In this article we explore an option for customers who want backups but also rely on Transaction Log Shipping for replication instead of SQL Server Availability Group clusters. The Commvault Command Center replication group removes almost all the complexity of configuring Log Shipping within SQL Server Management Studio. DBAs will already be familiar with the many steps in SQL Server Management Studio required to configure Log Shipping between two SQL Server instances, as well as the configuration of a third SQL Server instance for Log Shipping monitoring and alerts. Commvault Data Replication Technology for databases removes the majority of that configuration and the requirement for dedicated disks to hold the shipped logs. It is also more secure, as it doesn’t require opening SMB ports or port 135 for T-SQL Debugging and, more importantly, port 1433 (SQL Server Database Engine) is not used.
Commvault is able to achieve highly granular Recovery Point Objectives by creating a checkpoint in the Transaction Log of each SQL Server database. That is followed by a Transaction Log backup of the checkpointed contents, which marks the protected Virtual Log File pages as inactive. To prevent the log disk from filling, Commvault then instructs SQL Server to truncate the inactive Virtual Log File pages.
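For those who want to see this effect for themselves, the native equivalent is a simple transaction log backup. The sketch below is illustrative only — the backup path is made up, and Commvault actually drives this through the SQL Server VDI API rather than a disk file — but the effect on the log is the same.

USE master;
-- How full is the transaction log before the backup?
DBCC SQLPERF(LOGSPACE);
-- Backing up the log marks the backed-up Virtual Log File pages as inactive
-- so that they can be truncated and reused.
BACKUP LOG AdventureWorksDW2017
TO DISK = N'D:\Backups\AdventureWorksDW2017_log.trn';   -- illustrative path
-- The "Log Space Used (%)" figure should now drop for this database.
DBCC SQLPERF(LOGSPACE);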
Now, Microsoft SQL Server ‘Log Shipping’ replication uses the exact same checkpoint process as Transaction Log backups. This means that combining the two technologies is not possible, because their respective truncation operations interfere with the Log Sequence chain. Commvault addresses this issue with SQL Server ‘LiveSync’ backup and replication: Transaction Log backups are written to Commvault storage, and Commvault replays the Transaction Logs onto the destination SQL Server whilst maintaining the integrity of the SQL Server Transaction Log Sequence chain. This approach also means that the source and destination computers can be hosted on different physical, virtual or cloud hardware.
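If you ever want to check that the chain is intact, SQL Server records the backups it has taken in msdb (Commvault’s VDI-based backups should appear there too). A quick sanity check, assuming the AdventureWorksDW2017 database used later in this article, looks something like this:

-- Each log backup's first_lsn should match the previous backup's last_lsn;
-- a gap indicates the log sequence chain has been broken.
SELECT  database_name,
        type,                -- D = full, L = transaction log
        backup_start_date,
        first_lsn,
        last_lsn
FROM    msdb.dbo.backupset
WHERE   database_name = N'AdventureWorksDW2017'
ORDER BY backup_start_date;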
By installing and registering the Commvault Intelligent Data Agent (iDataAgent) software on the client, Commvault uses the Microsoft SQL Server VDI APIs to automate the backup, export, restore and replication of the databases to a destination SQL Server. If you are not already using the Commvault SQL Server iDataAgent, then before installing you will need to meet these system requirements and check that your Commvault license supports LiveSync Data Replication. If your license does not cover LiveSync Data Replication, you can review the most recent information about the Commvault Licensing program here and get in touch with your Account Manager to help you make the transition.
Once the backup and replication of SQL Server has been configured within Commvault, the Commvault Replication Monitor dashboard shows the health of the replication group. If there are any failures, Commvault will alert and send notifications. So effectively we will be using the central Commvault CommServe to also provide the Transaction Log Shipping monitoring capability.
Step-by-Step how to create a LiveSync Replication Group
Create a SQL Server Subclient containing the source databases you want to back up and LiveSync, and associate it with a Command Center Plan.
Create a Replication Group
Give the Replication group a name and choose the SQL Server client, Instance and the Database(s) that will be replicated.
Choose the destination Microsoft SQL Server client and Instance.
Choose where you want to write the database to, the sync delay and the Recovery Type. In this example, the source and destination will use the same name and local path. The Recovery Type “No Recovery” will leave the destination database in a restoring state that is inaccessible to users until it is recovered.
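For context, the “No Recovery” option maps to the native WITH NORECOVERY restore behaviour. A rough T-SQL equivalent would look like this (the file paths and logical file names below are hypothetical and would need to match your own database):

-- Restoring WITH NORECOVERY leaves the database in the RESTORING state,
-- so subsequent transaction log backups can still be applied to it.
RESTORE DATABASE AdventureWorksDW2017
FROM DISK = N'D:\Backups\AdventureWorksDW2017_full.bak'
WITH NORECOVERY,
     MOVE N'AdventureWorksDW2017_Data' TO N'D:\Data\AdventureWorksDW2017.mdf',
     MOVE N'AdventureWorksDW2017_Log'  TO N'D:\Log\AdventureWorksDW2017_log.ldf';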
The replication group is created immediately, and replication will follow each backup of the databases in the replication group.
The status of this SQL Server LiveSync is tracked in the Replication Monitor.
Commvault then creates the target Database as per the Recovery Type chosen previously.
Note that connections to the destination SQL Server database are rejected whilst it is in restoring mode. This is because it is effectively in single-user mode, with exclusive access reserved to facilitate replaying the logs shipped via LiveSync.
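You can confirm this from the destination instance with a quick query; while LiveSync is replaying logs the database should report RESTORING:

SELECT name, state_desc
FROM sys.databases
WHERE name = N'AdventureWorksDW2017';
-- state_desc will show RESTORING until the database is recovered.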
Now that the Replication Group has been configured, any changes written to the source database will be periodically synchronised to the destination. To demonstrate, I have added data into new tables.
USE AdventureWorksDW2017;

CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100));
INSERT INTO TestTable (FirstName, LastName)
SELECT FirstName, LastName FROM DimEmployee;
-- (296 rows affected)

CREATE TABLE SalesOrderTable ([SalesOrderNumber] VARCHAR(100));
INSERT INTO SalesOrderTable ([SalesOrderNumber])
SELECT [SalesOrderNumber] FROM [FactInternetSales];
-- (60398 rows affected)
After the next backup and synchronisation
We disable the LiveSync
With the replication disabled, the Destination Database needs to be taken out of NORECOVERY mode and put into RECOVERY mode.
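The T-SQL to do this against the destination instance is a one-liner; no backup file is needed, as it simply completes the restore sequence:

RESTORE DATABASE AdventureWorksDW2017 WITH RECOVERY;
-- The database leaves the RESTORING state and becomes readable and writable,
-- but no further shipped logs can be applied to it afterwards.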
We can now see that the destination database can be read and the new tables have been shipped.
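A quick check on the destination confirms the rows have arrived (the expected row counts are the ones produced by my earlier inserts):

USE AdventureWorksDW2017;
SELECT COUNT(*) AS TestTableRows       FROM dbo.TestTable;        -- expect 296
SELECT COUNT(*) AS SalesOrderTableRows FROM dbo.SalesOrderTable;  -- expect 60398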
Step-by-Step how to fail back a replicated database
At the time of writing, there is no Failover Utility for replicated backups from this iDataAgent. So we simply configure the LiveSync in reverse.
Firstly, let’s add some data to be replicated back
USE AdventureWorksDW2017;

CREATE TABLE SalesOrderTable2 ([SalesOrderNumber] VARCHAR(100));
INSERT INTO SalesOrderTable2 ([SalesOrderNumber])
SELECT [SalesOrderNumber] FROM SalesOrderTable;
-- (60398 rows affected)
Create a new Subclient or use the default.
Choose the database(s) and the Command Center Plan.
Configure a new Replication Group
The previous destination content is now the source
The previous source instance is now the destination instance
Typically, your DBA team would drop the original source database to avoid having to recreate the connection strings used by the applications connecting to the database. However, in this instance I have chosen to let Commvault create a new database.
Then choose the Recovery Type. Here I have chosen “Standby” mode, which makes the database read-only whenever there is a break from replaying shipped logs. When the database is ready to replay the shipped logs, all users are forcibly disconnected so the database can briefly switch into NORECOVERY mode while the log replay commences.
Note that by using Standby mode you need to provide a path where the undo file will be written. After the LiveSync operation is complete, SQL Server uses the data from the undo file and the transaction log to continue restoring the incomplete transactions. After the restore completes, the undo file is re-written with any transactions that are incomplete at that point.
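The native equivalent of Standby mode is a log restore WITH STANDBY. A minimal sketch follows; the database name, backup path and undo file path are placeholders and would need to match the database Commvault created on this instance:

-- STANDBY keeps the database read-only between log replays; incomplete
-- transactions are rolled back into the undo file so they can be
-- re-applied when the next shipped log is restored.
RESTORE LOG AdventureWorksDW2017
FROM DISK = N'D:\Backups\AdventureWorksDW2017_log.trn'
WITH STANDBY = N'D:\Standby\AdventureWorksDW2017_undo.dat';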
Once the replication has completed
We can see that our database is in Standby / Read-Only mode
And we can see that the database tables can be queried.
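From this instance, the standby state and the newly shipped table can be verified with queries like these (substitute the name of the database Commvault created):

-- is_in_standby = 1 indicates the database is in a read-only standby state.
SELECT name, state_desc, is_in_standby
FROM sys.databases
WHERE name = N'AdventureWorksDW2017';   -- adjust to the database name used here

SELECT COUNT(*) AS SalesOrderTable2Rows
FROM AdventureWorksDW2017.dbo.SalesOrderTable2;   -- expect 60398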
There we go, it’s that easy! Even without a Failover Utility (like CommServe LiveSync and Virtualisation “Failover Groups”).
Configuring the LiveSync Replication from the CommCell Console.
Configuring the replication group is easiest from the Command Center, but those options are also available within the classic CommCell Console.
LiveSync Configuration is performed at the Instance level.
The timing for when the replication runs is configured via a standalone “Site Replication” schedule.
By default the replication occurs immediately after the backup
The Site Replication Schedule also contains the client, instance and database configuration.
Also, the Advanced Copy Precedence dialog presents a very useful feature that will allow for a DASH/Aux Copy to complete before restoring from a non-primary copy.
Under that configuration, Commvault will perform granular transaction log backups, make auxiliary copies, replay the replicated logs into the destination database, and handle the monitoring and alerting, all through a simple HTML5 management interface.
For more information, please reach out to us or check out https://documentation.commvault.com and its “Scheduling Transaction Log Restores on a Standby SQL Server” whitepaper.
VMware Critical Vulnerabilities (March 2021)
As you may have heard, there have been some critical security alerts sent out by VMware in the last week. These cover some privately reported vulnerabilities, not things seen out in the wild (yet). They affect all supported versions of vCenter and ESXi released before November/December 2020 and carry critical severity scores, some of the highest recorded yet.
New versions of vCenter (6.5 U3n, 6.7 U3l and 7.0 U1c) were released in Nov/Dec 2020 which aren’t affected, and patches have been released for ESXi (ESXi70U1c-17325551, ESXi670-202102401-SG and ESXi650-202102101-SG, depending on ESXi version), which are recommended to be installed ASAP via Update Manager.
VMware has also documented workarounds for the vulnerabilities if they cannot be patched immediately. These workarounds disable the features of the products affected by the vulnerabilities: the vROps plugin to vCenter (whether or not vROps is being used) and the CIM hardware reporting in ESXi.
More information on the advisory and associated updated versions and workarounds can be found here – https://www.vmware.com/security/advisories/VMSA-2021-0002.html
As always with VMware upgrades, please remember to check that your integrated product versions are compatible with the new versions before upgrading, especially things such as VMware SRM and third-party backup products.

A look into Commvault Monitoring Policies
Those experienced with Commvault will, no doubt, have seen the HTML5 ‘Command Center’ getting royal attention every quarter. Gradually it has been transformed into Commvault’s crown jewel, so much so that at times one could be forgiven for losing sight of some uncut diamonds in the Java CommCell Console that, hopefully, will be fully migrated into the Command Center in future updates.
In this blog we look at an often-overlooked feature in Commvault, called Monitoring Policies, then show how you can access the Monitoring Policy Dashboards in the Command Center, and finally reflect on why on earth you are not using these gems right now!
Monitoring Policies are categorised under Log, Activity and System, but they all can be collected under a single SOLR IndexStore.
Commvault, I feel, has heavily overstated the requirements for the Index Server. This could frighten off customers wanting to take full advantage of what the product can deliver. In our environment, I have had no problems using a dedicated server with 20GB RAM, and we’ve hardly touched the 550GB disk allocated for the indexes. This is well below the stated requirements of 64GB RAM and 2TB SSD for the Index Directory, but you may need to scale up should you fully embrace Monitoring Policies.
The dedicated monitoring server will need the Index Store package installed on the registered Commvault client. Once the Index Store package is installed, you will need to ‘Add an Index Server to your CommCell Environment’, include the ‘Log Monitoring’ role, assign the server with the Index Store package as the node, and specify the directory to store the Log Monitoring indexes. You may need to wait up to 30 minutes for Apache SOLR to be fully provisioned; if it does not come up, it could be because port 20000 is unreachable from the CommServe or because you have not allocated enough RAM to the client. If you want to tune the amount of memory allocated, you can follow the instructions here.
Once you have set that up, you just choose the Commvault Clients that you want to monitor, against the Policy Templates shown here.
It’s pretty straightforward: choose the Monitoring Policy Type
Then the Monitoring Type
Give it a name
Choose the Clients and/or Client Groups
Choose the Index Server and the retention for this monitoring component.
Specify the Schedule Details
Review your Configuration, press Finish and repeat for all the Policy Types.
Once you have set up all the policies and given the schedules enough time to collect data, you can pop into the Command Center to see the dashboards.
From here you can choose either the Log Monitoring Policies or System Monitoring.
The Log Monitoring feature is very straightforward. It can be very useful when troubleshooting clients without having to pull the logs from the client, and it has some of the most useful log-filtering features of Commvault’s fantastic troubleshooting application, GxTail.
In our environment I was able to centrally pull the Commvault logs for all clients, with the exception of the Edge clients, whose Commvault instance does not come with the full File System Agent. These clients only come with File System Core, which is a bit of a bummer. I have raised a CMR with Commvault to see if this can be incorporated into a future release.
Now, you may be familiar with the Infrastructure Load report found in the Command Center, which reports on system resources (CPU/RAM usage); however, it is the Commvault System Monitoring Policy feature discussed below that is the reason for this blog. The hidden gemstones under here will currently require you to apply a little elbow grease to cut and polish in order to make them shine.
In our environment running 11.21.15, I found that System Monitoring was logging performance statistics without error, but many of the dashboards would return errors like these.
It is possible that many have tried, gotten to this point and been dismayed, so I did some research into what was going on. It was apparent that the dashboards had slight errors in the way they were querying SOLR DB facets. For example, the query ‘graphtype timechart avg(cpu) by processname’ worked when changed to ‘graphtype timechart avg(progress_CPU) by processname’, and I found that the System Monitoring Dashboard queries requiring attention were pre-cooked within a stored procedure inside the CommServe DB. When I raised this with Commvault Support, they very kindly compiled a Diagnostic Hotfix (v11SP21_Available_Diag2056_WinX64) that updated the CommServe; now the Media Agent Data Transferred widget needs one last touch-up from development. So, if you are running a similar build to 11.21.15 and want to see a performance dashboard like this, then reach out to Commvault Support. Note that if you have the diagnostic patch loaded and then update to 11.22, as we have done, some of the dashboards will return empty graphs with the reason “No data found”.
Looking at the dashboard below, suddenly we have an easy-to-use visual insight into how your Commvault processes are performing. If you have ever been frustrated troubleshooting an overnight or weekend performance problem through log bundles, I’m sure you will agree that this #DataIsBeautiful. I especially like the fact that these Monitoring Policies can provide a lot of information about what is happening in your environment without having to license any third-party software. Certainly, it is very reassuring that I now have historical, Commvault-specific performance and log data to compare against, should we need to investigate issues on the monitored servers.
Or you can click each graph and drill down into a custom date range to analyse the Commvault Process level statistics.
In summary, System Monitoring Policies may at first, unfairly, be seen as forgotten diamonds in the rough, but by putting in a bit of effort you can transform them into shiny diamonds that shed light on your environment. Hopefully soon we will see a product update that fully embraces this fantastic feature within the Command Center for both configuration and dashboard reports.

Point-In-Time recovery with the Commvault Exchange Mailbox Agent
This week a significant update for Commvault was released within Feature Release 11.22 that will help every customer protecting Exchange Online with Commvault: Point-In-Time Mailbox Recovery. This capability is not provided by Microsoft natively, and they have said that “Point in time restoration of mailbox items is out of scope for the Exchange Online service”. This lack of a native capability has meant third-party developers have had to work very hard to develop a fast, scalable and indexed mailbox backup solution. The technology backbone Commvault chose was a logical one – Apache Lucene/SOLR, which has long been used for File/Email Content Indexing, System Monitoring and other Analytics features. For many small to mid-sized organisations using just one Index Server and Access Node, the performance when using Feature Release 11.20+ Modern Authentication is excellent, with download throughput figures of up to 2TB/day not uncommon.
However, despite the feature-rich nature of the Commvault Exchange Mailbox Agent, there was no true point-in-time restore technology. The biggest technical challenge to overcome was that, previously, the only way Commvault could perform a point-in-time recovery was to restore the SOLR Index backup and replay it into a new instance (or instances) of an Index Server. Commvault Support have had this process down pat to help out customers who may have been understandably daunted by the procedure, but there had to be a better way, right? Well thankfully, the process of manually creating Index Servers and replaying the Index backups will soon be no more.
Feature Release 11.22, which at the time of writing is in “Technical Preview” (General Availability is expected in February 2021), has solved this problem by changing the way the SOLR Index does its “sharding”. What is sharding, and why do it? Well, it’s Lucene SOLR’s way of scaling out, and your point-in-time results are cached into a new SOLR core. Commvault now creates an Exchange Mailbox Recovery Point from just the user mailbox you want to restore, and the data is sharded off into a new SOLR core that will stay around for 30 days or until you delete the recovery point.
Now, at the time of writing, the point-in-time recovery still restores messages deleted by the user. The Restore Mailbox Messages UI does give you the option to include/exclude deleted messages, but in my testing that does not work yet. Also, during my testing, if a message was backed up in one folder and then backed up again after being moved into a different folder, the mailbox restore would restore both messages. These were the results in my test lab and test O365 environment, so your mileage may vary in your favour; however, I’d probably recommend holding out for now with this new Commvault agent, as it is still under Technical Preview classification. I can confirm that Commvault is correctly recording the common Message Identifiers in the indexes each time an email message is moved, so we can be confident that this will be resolved without having to re-back up the data protected under this client.
Here are some samples of how point-in-time recovery is performed. Note: this new feature is exclusively in the HTML5 Commvault Command Center.
First you will need at least some backups before your test.
Once the backup is complete, click the Client name.
Click on the Mailbox you want to restore.
A calendar of all the recovery points is shown, with the recovery points for each day visible after you click on the date. In this instance I have chosen the backup at 5:33PM (Job Id 21) and clicked Create recovery point.
Confirm the Recovery Point creation.
Locate the Recovery Points by clicking on the Recovery Points tab for the client, then tick the mailbox and click Restore > Restore Mailbox (chosen here) or Restore Messages
For in-place restores, all the messages protected up until this recovery point will be restored in place. Note: whilst, at the time of writing, Deleted and Moved messages will be restored as copies of the original message, you will not get a double-up of a message that already exists in the same folder.
Or, restore the data to another mailbox and folder. Note: Commvault out-of-mailbox restores will recover all the messages into sub-folders underneath the folder you specify.

Opening the Commvault CommCell Console remotely
Long-time users of the Commvault CommCell Console who have managed their environment off-host would have been using the Java Web Start application. It was simple and easy: just enter http://yourcommserve/console and open ‘galaxy.jnlp’. Unfortunately, that galaxy is now far, far away. ‘Java Web Start’ and the ‘Java Plug-in’ have been deprecated since March 2019, and the continued use of Java SE requires an Oracle subscription. So what is the best way to connect to the CommCell Console without having to Remote Desktop into the CommServe each time?
Firstly, I cannot stress this enough – do not use the Commvault installation software to install the CommCell Console package onto your desktop. The risk here is that any patching of the CommServe also means the desktops must be patched at exactly the same time, which cannot be guaranteed, and using it this way can cause fatal errors, including data loss.
The best way is to use the netx.jar bootstrap file. The simplest way to get netx.jar is to download it directly from the Commvault Cloud, and conveniently you don’t need a Maintenance Advantage login to download it.
You can also elect to download the netx.jar file directly from your CommServe Web Server at https://yourcommserve.company.local/console/netx.jar. If you are using a Chromium-based browser, you will likely be unable to download the netx.jar file if your CommServe Web Server is using self-signed certificates. If you have direct access to the CommServe, then you can copy the file located at “%CV_Instance001%\..\GUI\netx.jar” (e.g. “C:\Program Files\Commvault\ContentStore\GUI\netx.jar”).
Now, a number of times I’ve seen instances where people are using netx.jar but are still launching it with Java 8 SE. At the time of writing it may seem like the Console works, but you may run into Console-related errors or expose yourself to the kind of risks mentioned earlier.
What you should be using instead is OpenJDK 11. For almost two years the Commvault CommCell Console has been compiled for Java 11. Currently OpenJDK 11.0.7 is installed with the CommCell Console and is upgraded periodically with new Feature Releases of Commvault. OpenJDK 11 can be downloaded here thanks to the fantastic contributors to the AdoptOpenJDK community.
You can download the JDK as an MSI, but I prefer to download the zipped binary instead because I would rather choose at runtime which Java version I want to run. In this example the netx.jar and the extracted zipped JRE are in my ‘Downloads’ folder (don’t judge me), and I have created a shortcut on my desktop to:
"%HOMEPATH%\Downloads\OpenJDK_11.0.7\bin\java" -jar "%HOMEPATH%\Downloads\netx.jar"
From here, just enter the CommServe Hostname and click OK.
Then wait for Java to run a few commands
and within a few seconds you are prompted to log into your CommCell Console.
Whilst not as simple and convenient as the old Java Web Start way, it is the safest way of running the console remotely without having to Remote Desktop into Commvault Servers.

How to restore Amazon RDS SQL Server native dump backups within Commvault
Back in August, I discovered an issue that impacts Commvault native dump backups of Amazon RDS SQL Server and affects all users who are backing up these databases in a time zone other than UTC+0. This blog goes into some detail about this problem and why you must be careful how you restore your backups. A Diagnostic Fix is currently available on request from Commvault and will go mainstream with the November 2020 Maintenance Pack, but users need to be aware that this fix will not retrospectively resolve your past Amazon RDS SQL Server backups.
How to reproduce the issue
The steps described here are for the Commvault CommCell Console, but the reproduction steps are just as relevant to the Command Center.
Attempt a Browse and Restore by Job
Click View Content
Error: There is no data to restore. Verify that the correct dates have been entered.
So what happens if you really need to restore that backup and you browse and restore by Time Range?
Our first backup job here finished at 7:44:52PM on 26 August 2020, and I have chosen a restore by End Time that is 9 hours 59 minutes ahead of when the backup occurred (+10 hours minus 1 minute).
Same error!
However, if I repeat the Browse and Restore by End Time but choose a time 10 hours and 1 minute ahead of the backup
And voila, Commvault was able to retrieve the list of SQL Server database backups from the Job!
And the problem is?
The “There is no data to restore. Verify that the correct dates have been entered” error only appears if there are no backups on or before the Browse by Date Time. Whenever an error message comes up, you clearly know that you have to take corrective action. However, the problem here is that when you browse and restore this way, it is quite likely that you will restore either an older or a newer backup than intended; and the backup operator will not even know until the DBA discovers the error.
So a restore by Date and Time requires the backup operator to do a time calculation. For many customers that work in a single time zone, this may be quite straightforward. However, extra care must be taken when restoring databases that could be in different time zones.
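If you are unsure what offset to apply, running the following against a SQL Server in your local time zone shows how many minutes local time is ahead of UTC; this is just a helper sketch for the manual calculation, not part of the Commvault fix:

-- How far ahead of UTC is this server's local time?
SELECT SYSDATETIMEOFFSET()                        AS LocalTimeWithOffset,
       DATEDIFF(MINUTE, GETUTCDATE(), GETDATE())  AS MinutesAheadOfUTC;
-- e.g. an AEST (+10:00) server returns 600 minutes, matching the
-- +10 hour adjustment used in the example above.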
The Good, the Bad and the Ugly
The good news is that there is a Diagnostic Hotfix from Commvault that needs to be installed on the Access Node, and I can confirm that it has been prepared for at least Commvault FR11.20.17 and FR11.21.5. Contact Commvault Support if you just want this Diagnostic Fix, or install the November 2020 Maintenance Pack across your CommCell. This will re-enable you to do a Browse and Restore by Job without having to browse by date.
The bad news is that it does not retrospectively fix the job history. Why? Sadly, it is just too risky for Commvault to create an update script against the sqlDBBackupInfo table to change the database backup times to reflect the true timestamp, because there is no safe way to do it globally for all time zones.
The ugly is that your backup operators need to be aware of when this patch was applied, so they know for which dates a Browse and Restore by Job will work and when they must instead restore by providing a date range as described in this blog.

The growing need to transform government IT
Demand for new digital service experiences is creating a growing need to transform government IT. New consumption-based models for IT investment can be the perfect paradigm for digital transformation initiatives.

Carefully chartering digital transformation in healthcare
Due to scale, safety and compliance measures, healthcare providers have been understandably more cautious than other sectors when it comes to their digital transformation programs. But that might all be about to change as we enter a new paradigm for digital transformation and investment.

Secure remote network access is driving business continuity during these challenging times
Secure remote network access is driving business continuity during these challenging times. Read more here:

Did you know that data is your most strategic and profitable asset?
In today’s blog, Perfekt’s Chief Technology Officer, Dan Roitman, and Hitachi Vantara’s senior partner manager, Marc Fiala, discuss how to unleash the power of data and transform the future of your business.

Marc: So, Dan. How do you feel about being Hitachi Vantara’s only platinum partner in Australia?
Dan: It’s fantastic, Marc. You know, for Perfekt, the Hitachi Vantara partnership was a no-brainer. Like you, our team believes that data has the power not only to transform the future of business, but society as a whole. This really is a great time to be in technology and seeking others who think beyond ‘the possible’.
Marc: A lot of that possibility is down to the world’s ever-growing data streams, right?
Dan: Absolutely. Company information is growing at an exponential rate. Within that data is the key to discovering new and better ways of doing things. The story of every company’s past and future is written in its data: their efficiencies, innovations, opportunities and customer experiences; even their wins, losses and risks. Intelligent businesses are realising their data holds precious insights, which they can use to make more strategic and profitable business decisions.
Are you ready to maximise your return on data?
Together with Hitachi Vantara, Perfekt connects business, human and machine data to create Internet of Things (IoT) solutions that benefit companies and society as a whole. Leveraging machine learning and artificial intelligence, Hitachi Vantara’s unique Data Stairway to Value helps organisations to store, protect, enrich, activate and monetise their data. If you’re ready to maximise your return on data, contact a Perfekt specialist for a free consultation or visit perfekt.com.au.