It’s an almost déjà vu feeling from March this year, but there have been some additional critical security alerts sent out from VMware in the last week. These again cover privately reported vulnerabilities, not things seen out in the wild. They affect all currently supported versions of vCenter (6.5, 6.7 and 7.0) and carry a critical severity rating. This time around only vCenter is affected, not the ESXi hosts themselves, which makes remediation more straightforward.
There have been new versions of vCenter (6.5 U3p, 6.7 U3n and 7.0 U2b) released this week which close the vulnerabilities.
VMware has also documented workarounds for the vulnerabilities if these cannot be patched immediately. These workarounds disable the features of the products which are affected by the vulnerabilities. These are the vCenter plugins for vSAN health checks, vROps, Site Recovery Manager, vSphere Lifecycle Manager and vCloud Director Availability.
More information on the advisory and associated updated versions and workarounds can be found here – https://www.vmware.com/security/advisories/VMSA-2021-0010.html
If you are a Perfekt Managed Services customer with VMware management included, then rest assured that by the time you read this the remediation work for your environment has already started or completed.
As always with VMware upgrades, please remember to check that your integrated product versions are compatible with the new versions before upgrading, especially things such as VMware SRM and third-party backup products.
Protecting Exchange Online with Commvault when Azure Active Directory ‘Security Defaults’ are enabled
Microsoft, in late 2019, announced Azure Active Directory ‘Security Defaults’ as a simple one-click approach for customers to instantly harden their environment and enforce stronger security standards. With it, MFA was enabled for all accounts, Basic Authentication was blocked and unattended PowerShell scripts were prevented. It also meant problems for some third-party software that relied on scripted automation, most notably backup.
In Feature Release 11.20, Commvault introduced Modern Authentication support; however, the unattended OAuth2 ROPC authentication flow would fail with error code AADSTS50076 because the “User did not pass the MFA challenge (non interactive)”. This meant that customers who wanted to protect Exchange Online had to rely on journal forwarding to an Exchange Server or configure a ContentStore SMTP Server client.
The good news is that, in this article, I will be highlighting a recent hidden gem that now allows customers to protect Exchange Online User Mailboxes. There is an additional setting that, when applied to the Exchange Mailbox Access Node, instructs Commvault to ignore the configured Service Account for PowerShell automation. With this change applied, Commvault will use only the configured Microsoft Graph App Registrations for both administrative commands and backup.
To implement this, Commvault will need to be updated to at least Feature Release 11.22, although I recommend 11.23 as it has more precise User Mailbox discovery. Also, with Feature Release 11.23, Commvault can protect the user Archive Mailbox without using service accounts. The only Exchange data protection feature I have observed that is not yet supported (but will be soon) is Exchange Public Folders.
Also, at the time of writing, this is exclusive to the Commvault Command Center and is configured by creating an O365 Application. This creates an Exchange Mailbox instance of ‘Environment Type = Exchange Online (Access Through Azure Active Directory)’. Future releases of Commvault will protect the other Environment Types: ‘Exchange Online (Access Through On-Premises Active Directory)’ and ‘Exchange Hybrid’.
The hardware requirements remain identical (Index Server with Exchange Role plus an Access Node). Within Azure you will need to manually create the App Registrations that will be entered into the Command Center. The backup storage targets and RPO will be configured under a Command Center ‘Server Plan’ and the message-level protection will be configured under an ‘Office365 Plan’ (similar to Exchange Configuration Policies).
Then you will need to add an O365 App in the Command Center.
You can either create your own App Registrations in the Azure Portal or, optionally, download the ‘CVO365CustomConfigHelper.exe’ toolkit that is available from the FR11.23 configuration.
Note that because this removes one highly privileged Service Account from your Commvault environment, you may want to consider this security-hardened backup configuration even if you have Security Defaults disabled and are using Conditional Access Policies through your Azure Premium subscription.
I recently implemented this for FR11.22, and after a few days Commvault was updated to FR11.23 to protect the Archive Mailboxes. I had already created an internal utility to extract Exchange Mailbox job activity and was able to quickly tweak it to show how many messages were initially protected, and subsequently how many more were protected once the Office 365 Plan began protecting user Archive Mailboxes.
I have used that data in a Custom Report I created that shows the initial backup activity for my user mailbox and the subsequent protection of my Archive Mailbox. This report reads the extracted Exchange Mailbox job activity data and shows the number of messages protected with each backup job for each mailbox chosen. If you are interested in this report then feel free to reach out to us.
In this article we explore an option for customers who want backups but also rely on Transaction Log Shipping for replication instead of SQL Server Availability Group clusters. The Commvault Command Center replication group removes almost all the complexity of configuring Log Shipping within SQL Server Management Studio. DBAs will already be familiar with the many steps required in SQL Server Management Studio to configure Log Shipping between two SQL Server instances, as well as the configuration of a third SQL Server instance for Log Shipping monitoring and alerts. Commvault Data Replication Technology for databases removes the majority of that configuration and the requirement for dedicated disks to hold the shipped logs. It is also more secure: it doesn’t require opening SMB ports or port 135 for T-SQL debugging, and, more importantly, port 1433 (the SQL Server Database Engine) is not used.
Commvault is able to achieve highly granular Recovery Point Objectives by creating a checkpoint in the transaction log of each SQL Server database. That is followed by a transaction log backup of the checkpointed contents, which marks the protected virtual log files (VLFs) as inactive. To prevent the log disk from filling, Commvault then instructs SQL Server to truncate the inactive VLFs.
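Commvault drives this process through the SQL Server VDI API, but the native equivalent can be sketched in T-SQL. This is an illustrative example only – the backup path is hypothetical, and the database name is borrowed from the demo later in this article:

```sql
-- A transaction log backup implicitly checkpoints the database,
-- then marks the backed-up virtual log files (VLFs) as inactive
-- so SQL Server can truncate (reuse) that portion of the log.
BACKUP LOG AdventureWorksDW2017
TO DISK = N'D:\Backups\AdventureWorksDW2017_log.trn';

-- Inspect log reuse afterwards; log_reuse_wait_desc returns to
-- NOTHING once nothing is holding the inactive VLFs active.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'AdventureWorksDW2017';
```

This is the same checkpoint-then-truncate behaviour described above; Commvault simply automates it and writes the log stream to Commvault storage instead of a local disk file.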
Now, Microsoft SQL Server ‘Log Shipping’ replication uses the exact same checkpoint process as transaction log backups. This means that combining the two technologies is not possible, because their respective truncation operations break the log sequence chain. Commvault addresses this issue with SQL Server ‘Live Sync’ backup and replication: transaction log backups are written to Commvault storage, and Commvault replays the transaction logs onto the destination SQL Server whilst maintaining the integrity of the transaction log sequence chain. This approach also means that the source and destination can be hosted on different physical, virtual or cloud hardware.
By installing and registering Commvault Intelligent Data Agent (iDataAgent) software on the client, Commvault uses the Microsoft SQL Server VDI APIs to automate the backup, export, restore and replication of databases to a destination SQL Server. If you are not already using the Commvault SQL Server iDataAgent, then before installing you will need to meet the system requirements and check whether your Commvault license supports LiveSync Data Replication. If your license does not cover LiveSync Data Replication, you can review the most recent information about the Commvault Licensing program here and get in touch with your Account Manager to help you make the transition.
Once the backup and replication of SQL Server have been configured within Commvault, the Replication Monitor dashboard shows the health of the replication group. If there are any failures, Commvault will alert and send notifications. So effectively we are using the central Commvault CommServe to also provide the Transaction Log Shipping monitoring capability.
Step-by-step: how to create a LiveSync Replication Group
Create a SQL Server Subclient with the Source Databases you want to backup and LiveSync and associate it to a Command Center Plan.
Create a Replication Group
Give the Replication group a name and choose the SQL Server client, Instance and the Database(s) that will be replicated.
Choose the destination Microsoft SQL Server client and Instance.
Choose where you want to write the database to, the sync delay and the Recovery Type. In this example, the source and destination will use the same name and local path. The Recovery Type “No Recovery” will leave the destination database in a non-readable RESTORING state that can continue to accept shipped logs.
The replication group is created immediately, and replication will follow each backup of the databases in the replication group.
The status of this SQL Server LiveSync is tracked in the Replication Monitor.
Commvault then creates the target Database as per the Recovery Type chosen previously.
Note that connections to the destination SQL Server database are rejected whilst it is in restoring mode. This is because it effectively has exclusive, single-user access to facilitate the replaying of the logs shipped via LiveSync.
Now that the Replication Group has been configured, any changes written to the source database will be periodically synchronised to the destination. To demonstrate, I have added data into new tables.
USE AdventureWorksDW2017;

CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100));
INSERT INTO TestTable (FirstName, LastName)
SELECT FirstName, LastName FROM DimEmployee;
-- (296 rows affected)

CREATE TABLE SalesOrderTable ([SalesOrderNumber] VARCHAR(100));
INSERT INTO SalesOrderTable ([SalesOrderNumber])
SELECT [SalesOrderNumber] FROM [FactInternetSales];
-- (60398 rows affected)
After the next backup and synchronisation
We disable the LiveSync
With the replication disabled, the Destination Database needs to be taken out of NORECOVERY mode and put into RECOVERY mode.
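If you are doing this step by hand rather than through the Command Center, the equivalent is the standard T-SQL recovery step (using the database name from the earlier example):

```sql
-- Stop replaying shipped logs, run crash recovery and bring the
-- database online read/write. No further transaction logs can be
-- restored into this database after recovery completes.
RESTORE DATABASE AdventureWorksDW2017 WITH RECOVERY;
```

Note that this is a one-way operation for the replication chain: to resume synchronisation afterwards you would need to re-seed the destination.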
We can now see that the destination database can be read and the new tables have been shipped.
Step-by-step: how to fail back a replicated database
At the time of writing, there is no Failover Utility for replicated backups from this iDataAgent. So we simply configure the LiveSync in reverse.
Firstly, let’s add some data to be replicated back
USE AdventureWorksDW2017;

CREATE TABLE SalesOrderTable2 ([SalesOrderNumber] VARCHAR(100));
INSERT INTO SalesOrderTable2 ([SalesOrderNumber])
SELECT [SalesOrderNumber] FROM SalesOrderTable;
-- (60398 rows affected)
Create a new Subclient or use the default.
Choose the database(s) and the Command Center Plan.
Configure a new Replication Group
The previous destination content is now the source
The previous source instance is now the destination instance
Typically, your DBA team would drop the original source database to avoid having to recreate the connection strings used by the applications connecting to it. However, in this instance I have chosen to let Commvault create a new database.
Then choose the Recovery Type. Here I have chosen “Stand by” mode, which makes the database read-only whenever there is a break from replaying shipped logs. When the database is ready to replay shipped logs, all users are forcibly disconnected and the database is briefly switched into NORECOVERY mode whilst the log replay takes place.
Note that when using Standby mode you need to provide a path where the undo file will be written. After the Live Sync operation is complete, SQL Server uses the data from the undo file and the transaction log to continue restoring the incomplete transactions. After the restore completes, the undo file is re-written with any transactions that are incomplete at that point.
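In native T-SQL, Standby mode corresponds to restoring logs WITH STANDBY. A minimal sketch, with illustrative file paths:

```sql
-- Replay a shipped transaction log but keep the database readable.
-- Incomplete transactions are rolled back into the undo file so the
-- next log restore can roll them forward again.
RESTORE LOG AdventureWorksDW2017
FROM DISK = N'D:\LogShip\AdventureWorksDW2017_1.trn'
WITH STANDBY = N'D:\LogShip\AdventureWorksDW2017_undo.dat';
```

Commvault performs the equivalent replay from its own storage, so you only supply the undo file path in the replication group settings.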
Once the replication has completed
We can see that our database is in Standby / Read-Only mode
And we can see that the database tables can be queried.
There we go, it’s that easy! Even without a Failover Utility (like CommServe LiveSync and Virtualisation “Failover Groups”).
Configuring the LiveSync Replication from the CommCell Console.
Configuring the replication group is easiest from the Command Center, but those options are also available within the classic CommCell Console.
LiveSync Configuration is performed at the Instance level.
The timing for when the replication runs is configured via a standalone “Site Replication” schedule.
By default the replication occurs immediately after the backup
The Site Replication Schedule also contains the client, instance and database configuration.
Also, the Advanced Copy Precedence dialog presents a very useful feature that will allow for a DASH/Aux Copy to complete before restoring from a non-primary copy.
Under that configuration, Commvault will perform granular transaction log backups, make auxiliary copies, replay the replicated logs into the destination database, and do all the monitoring/alerts all through a simple HTML5 management interface.
For more information, please reach out to us or check out https://documentation.commvault.com and its “Scheduling Transaction Log Restores on a Standby SQL Server” whitepaper.
As you may have heard, there have been some critical security alerts sent out from VMware in the last week. These cover privately reported vulnerabilities, not things seen out in the wild (yet). They affect all supported versions of vCenter and ESXi released before November/December 2020 and carry critical severity ratings, some of the highest VMware has recorded.
There have been new versions of vCenter (6.5 U3n, 6.7 U3l and 7.0 U1c) released in Nov/Dec 2020 which aren’t affected, and patches released for ESXi (ESXi70U1c-17325551, ESXi670-202102401-SG or ESXi650-202102101-SG, depending on ESXi version) which are recommended to be installed ASAP via Update Manager.
VMware has also documented workarounds for the vulnerabilities if these cannot be patched immediately. These workarounds disable the features of the products which are affected by the vulnerabilities. These are the vROps plugin to vCenter (whether or not vROps is being used) and the CIM hardware reporting in ESXi.
More information on the advisory and associated updated versions and workarounds can be found here – https://www.vmware.com/security/advisories/VMSA-2021-0002.html
As always with VMware upgrades, please remember to check that your integrated product versions are compatible with the new versions before upgrading, especially things such as VMware SRM and third-party backup products.
Those experienced with Commvault will, no doubt, have seen the HTML5 ‘Command Center’ getting royal attention every quarter. Gradually it has been transformed into Commvault’s crown jewel, so much so that at times one could be forgiven for losing sight of some uncut diamonds in the Java CommCell Console that, hopefully, will be fully migrated into the Command Center in future updates.
In this blog we look at an often-overlooked feature in Commvault, called Monitoring Policies, then show how you can access the Monitoring Policy Dashboards in the Command Center, and finally reflect on why on earth you are not using these gems right now!
Monitoring Policies are categorised under Log, Activity and System, but they all can be collected under a single SOLR IndexStore.
Commvault, I feel, has heavily overstated the requirements for the Index Server, which could frighten off customers wanting to take full advantage of what the product can deliver. In our environment I have had no problems using a dedicated server with 20GB RAM, and we’ve hardly touched the 550GB disk allocated for the indexes. This is well below the stated requirements of 64GB RAM and a 2TB SSD for the Index Directory, but you may need to scale up should you fully embrace Monitoring Policies.
The dedicated monitoring server will need the Index Store package installed on the registered Commvault client. Once the Index Store package is installed, you will need to ‘Add an Index Server to your CommCell Environment’, include the ‘Log Monitoring’ role, and then assign the server with the Index Store package as the node, along with the directory in which to store the Log Monitoring indexes. You may need to wait up to 30 minutes for it to fully prepare Apache SOLR; if it does not come up, it could be because port 20000 is unreachable from the CommServe or because you have not allocated enough RAM to the client. If you want to tune the amount of memory allocated, you can follow the instructions here.
Once you have set that up, you just choose the Commvault Clients that you want to monitor, against the Policy Templates shown here.
It’s pretty straightforward – choose the Monitoring Policy Type
Then the Monitoring Type
Give it a name
Choose the Clients and/or Client Groups
Choose the Index Server and the retention for this monitoring component.
Specify the Schedule Details
Review your Configuration, press Finish and repeat for all the Policy Types.
Once you have set up all the policies and given the schedules enough time to collect the data, you can pop into the Command Center to see the dashboards.
From here you can choose either the Log Monitoring Policies or System Monitoring.
The Log Monitoring feature is very straightforward and can be very useful when troubleshooting clients without having to pull the logs from them; it also has some of the most useful log-filtering features of Commvault’s fantastic troubleshooting application, GxTail.
In our environment I was able to centrally pull the Commvault logs for all clients, with the exception of the Edge clients, whose Commvault instance does not come with the full File System Agent. These clients only come with File System Core, which is a bit of a bummer. I have raised a CMR with Commvault to see if this can be incorporated in a future release.
Now, you may be familiar with the Infrastructure Load report found in the Command Center, which reports on system resources (CPU/RAM usage); however, it is the Commvault System Monitoring Policy feature discussed below that is the reason for this blog. The hidden gemstones under here currently require a little elbow grease to cut and polish in order to make them shine.
In our environment running 11.21.15, I found that the System Monitoring was logging performance statistics without error but many of the dashboards would return errors like these.
It is possible that many have tried, gotten to this point and been dismayed, so I did some research into what was going on. It was apparent that the dashboards had slight errors in the way they queried the SOLR DB facets. For example, the query ‘graphtype timechart avg(cpu) by processname’ worked when changed to ‘graphtype timechart avg(progress_CPU) by processname’, and I found that these System Monitoring Dashboard queries requiring attention were pre-cooked within a stored procedure inside the CommServe DB. When I raised this with Commvault Support, they very kindly compiled a Diagnostic Hotfix (v11SP21_Available_Diag2056_WinX64) that updated the CommServe; now only the Media Agent Data Transferred widget needs one last touch-up from development. So, if you are running a similar build to 11.21.15 and want to see a performance dashboard like this, then reach out to Commvault Support. Note that if you have the diagnostic patch loaded and then update to 11.22, as we have done, some of the dashboards will return empty graphs with the reason “No data found”.
Looking at the dashboard below, suddenly we have an easy-to-use visual insight into how your Commvault processes are performing. If you have ever been frustrated troubleshooting an overnight or weekend performance problem through log bundles, I’m sure you will agree that this #DataIsBeautiful. I especially like the fact that these Monitoring Policies can provide a lot of information about what is happening in your environment without having to license any third-party software. It is also very reassuring that I now have historical, Commvault-specific performance and log data to compare against, should we need to investigate issues on the monitored servers.
Or you can click each graph and drill down into a custom date range to analyse the Commvault Process level statistics.
In summary, System Monitoring Policies may at first, unfairly, be seen as forgotten diamonds in the rough, but by putting in a bit of effort you can transform them into shiny diamonds that shed light on your environment. Hopefully we will soon see a product update that fully embraces this fantastic feature within the Command Center, for both configuration and dashboard reports.
This week a significant update for Commvault was released within Feature Release 11.22 that will help every customer protecting Exchange Online with Commvault: Point-In-Time Mailbox Recovery. This capability is not provided natively by Microsoft, who have said that “Point in time restoration of mailbox items is out of scope for the Exchange Online service”. This lack of a native capability has meant third-party developers have had to work very hard at developing a fast, scalable and indexed mailbox backup solution. The technology backbone Commvault chose was a logical one – Apache Lucene SOLR, which has long been used for File/Email Content Indexing, System Monitoring and other Analytics features. For many small-to-mid-sized organisations using just one Index Server and Access Node, the performance when using Feature Release 11.20+ Modern Authentication is excellent, with download throughput figures of up to 2TB/day not uncommon.
However, despite the feature-rich nature of the Commvault Exchange Mailbox Agent, there was no true point-in-time restore technology. The biggest technical challenge to overcome was that, previously, the only way Commvault could perform a point-in-time recovery was to restore the SOLR index backup and replay it into a new instance (or instances) of an Index Server. Commvault Support have had this process down pat to help out customers who may have been understandably daunted by it, but there had to be a better way, right? Well, thankfully, the process of manually creating Index Servers and replaying the index backups will soon be no more.
Feature Release 11.22, which at the time of writing is in “Technical Preview” (General Availability is scheduled for February 2021), has solved this problem by changing the way the SOLR index does its “sharding”. What is sharding, and why do it? Well, it’s Lucene SOLR’s way of scaling out, and your point-in-time results are cached into a new SOLR core. Commvault now creates an Exchange Mailbox Recovery Point from just the user mailbox you want to restore, and the data is sharded off into a new SOLR core that will stay around for 30 days or until you delete the recovery point.
Now, at the time of writing, the point-in-time recovery still restores messages deleted by the user. The Restore Mailbox Messages UI does give you the option to include/exclude deleted messages, but in my testing that does not work yet. Also, during my testing, if a message was backed up in one folder and then backed up again after being moved into a different folder, the mailbox restore would restore both copies. These were the results in my test lab and test O365 environment, so your mileage may vary in your favour; however, I’d recommend holding out for now with this new Commvault agent as it is still under Technical Preview classification. I can confirm that Commvault is correctly recording the common message identifiers in the indexes each time an email message is moved, so we can be confident that this will be resolved without having to re-back up the data protected under this client.
Here are some examples of how point-in-time recovery is performed. Note: this new feature is exclusive to the HTML5 Commvault Command Center.
First you will need at least some backups before your test.
Once the backup is complete, click the Client name.
Click on the Mailbox you want to restore.
A calendar of all the recovery points will be visible; click on a date to see the recovery points for that day. In this instance I have chosen the backup at 5:33PM (Job Id 21) and clicked Create recovery point.
Confirm the Recovery Point creation.
Locate the Recovery Points by clicking on the Recovery Points tab for the client, then tick the mailbox and click Restore > Restore Mailbox (chosen here) or Restore Messages
For in-place restores, all the messages protected up until this recovery point will be restored in place. Note: at the time of writing, deleted and moved messages will be restored as copies of the original message, but you will not get a double-up of a message if it already exists in the same folder.
Or, restore the data to another mailbox and folder. Note: Commvault out-of-mailbox restores will recover all the messages into sub-folders underneath the folder you specify.
Long-time users of the Commvault CommCell Console who have managed their environment off-host would have been using the Java Web Start application. It was simple and easy: just enter http://yourcommserve/console and open ‘galaxy.jnlp’. Unfortunately, that galaxy is now far, far away. ‘Java Web Start’ and the ‘Java Plug-in’ have been deprecated since March 2019, and the continued use of Java SE requires an Oracle subscription. So what is the best way to connect to the CommCell Console without having to Remote Desktop into the CommServe each time?
Firstly, I cannot stress this enough – do not use the Commvault installation software to install the CommCell Console package onto your desktop. The risk here is that any patching of the CommServe means the desktops must be patched at exactly the same time, which cannot be guaranteed, and using the console this way can cause fatal errors, including data loss.
The best way is to use the netx.jar bootstrap file. The simplest way to get netx.jar is to download it directly from the Commvault Cloud, and conveniently you don’t need a Maintenance Advantage login to download it.
You can also elect to download the netx.jar file directly from your CommServe Web Server at https://yourcommserve.company.local/console/netx.jar. If you are using a Chromium-based browser, you will likely be unable to download the netx.jar file if your CommServe Web Server is using self-signed certificates. If you have direct access to the CommServe, then you can copy the file located at "%CV_Instance001%\..\GUI\netx.jar" (e.g. "C:\Program Files\Commvault\ContentStore\GUI\netx.jar").
Now, a number of times I’ve seen instances where people using netx.jar are still launching it with Java 8 SE. It may seem like the Console works, but you may run into Console-related errors or expose yourself to the kind of risks mentioned earlier.
What you should be using instead is OpenJDK 11. For almost two years the Commvault CommCell Console has been compiled for Java 11. Currently OpenJDK 11.0.7 is installed with the CommCell Console and is upgraded periodically with new Feature Releases of Commvault. OpenJDK 11 can be downloaded here, thanks to the fantastic contributors to the AdoptOpenJDK community.
You can download the JDK as an MSI, but I prefer to download the zipped binary instead because I would rather choose at runtime which Java version I want to run. In this example the netx.jar and the extracted zipped JRE are in my ‘Downloads’ folder (don’t judge me), and I have created a desktop shortcut with the following target:
"%HOMEPATH%\Downloads\OpenJDK_11.0.7\bin\java" -jar "%HOMEPATH%\Downloads\netx.jar"
From here, just enter the CommServe Hostname and click OK.
Then wait for Java to run a few commands
and within a few seconds you are prompted to log into your CommCell Console.
Whilst not as simple and convenient as the old Java Web Start way, it is the safest way of running the console remotely without having to Remote Desktop into Commvault Servers.
Back in August, I discovered an issue that impacts Commvault native dump backups of Amazon RDS SQL Server and will affect all users who are backing up these databases in a time zone other than UTC+0. This blog goes into some detail about this problem and why you must be careful how you restore your backups. A Diagnostic Fix is currently available on request from Commvault and will be mainstream come the November 2020 Maintenance Pack, but users need to be aware that this fix will not retrospectively resolve your past Amazon RDS SQL Server backups.
How to reproduce the issue
The steps described here are for the Commvault CommCell Console, but the reproduction steps are just as relevant to the Command Center.
Attempt a Browse and Restore by Job
Click View Content
Error: There is no data to restore. Verify that the correct dates have been entered.
So what happens if you really need to restore that backup and you browse and restore by Time Range?
Our first backup job here finished at 7:44:52PM on the 26th August 2020, and I have chosen a restore by End Time that is 9 hours 59 minutes ahead of when the backup occurred (9:59 = +10 hours − 1 minute).
However, if I repeat the Browse and Restore by End Time but choose a time 10 hours 1 minute ahead of the backup
And voila, Commvault was able to retrieve the list of SQL Server database backups from the Job!
And the problem is?
The “There is no data to restore. Verify that the correct dates have been entered” error only appears if there are no backups on or before the browse Date/Time. Whenever an error message comes up, you clearly know you have to take corrective action. The real problem is that when you browse and restore this way, it is quite likely that you will restore an older or newer backup than intended, and the backup operator will not even know until the DBA discovers the error.
So a restore by Date and Time requires the backup operator to do a time calculation. For many customers that work in a single time zone this may be quite straightforward, but extra care must be taken when restoring databases that could be in different time zones.
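If you do need to browse by time on an affected build, SQL Server itself can do the offset arithmetic. A sketch using AT TIME ZONE (SQL Server 2016+; the Windows time zone name shown is an example for UTC+10 and should be swapped for your own):

```sql
-- Work out the browse End Time for an affected (pre-fix) job:
-- the recorded timestamp behaves as if the local finish time had
-- been stored as UTC, so interpret it as UTC and shift it into
-- the local zone to find when the job becomes browsable.
DECLARE @backup_finish datetime2 = '2020-08-26 19:44:52';

SELECT @backup_finish
       AT TIME ZONE 'UTC'                        -- treat as UTC
       AT TIME ZONE 'AUS Eastern Standard Time'  -- shift to +10
       AS earliest_browse_end_time;
-- Returns 2020-08-27 05:44:52 +10:00, i.e. 10 hours after the
-- recorded finish time, matching the behaviour shown above.
```

Anything at or after this End Time (the “10 hours 1 minute ahead” browse in the example) will find the job; anything before it will not.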
The Good, the Bad and the Ugly
The good news is that there is a Diagnostic Hotfix from Commvault that needs to be installed on the Access Node; I can confirm that it has been prepared for at least Commvault FR11.20.17 and FR11.21.5. Contact Commvault Support if you just want the Diagnostic Fix, or install the November 2020 Maintenance Pack across your CommCell. This will re-enable you to do a Browse and Restore by Job without having to browse by date.
The bad news is that it does not retrospectively fix the job history. Why? Sadly, it is just too risky for Commvault to create an update script for the sqlDBBackupInfo table to correct the database backup times, because there is no safe way to do it globally for all time zones.
The ugly is that your backup operators need to be aware of when this patch was applied, so they know for which dates a Browse and Restore by Job will work, and when they must instead restore by providing a date range as described in this blog.
The Road to AI Will Be Paved with Gold
We live at a time of rapid and radical change, which is both exciting and scary, to varying degrees. It seems like every day we read about a new app, platform or disruptive, digitised service that would dramatically alter our lives for the better. And, as it is with all innovations, some do, but most don’t.
Early each January, the Las Vegas Consumer Electronics Show is the only place to be if you want to discover what new inventions their sales-pattering inventors predict you’ll be using sooner than you could possibly imagine.
By now, most of us have heard of AI, even if some of us don’t completely understand what it is and does. We believe there’s no better time than today to learn about the full capabilities of AI and how it can make your organisation far more efficient.
As a company that works with AI applications on a daily basis, we predict that AI’s impact on our professional lives is at a tipping point and we’re likely to see exponential growth in its quantifiable, understandable usefulness. We’ll go even further and predict that the way AI captures our data, interprets its relevant information and then implements new and improved processes will add value to many of Australia’s visionary industries.
Perfekt has harnessed the power of AI to provide an Analytics Platform as a Service to our clients. This enables these companies to maximise the inherent value AI has to offer, as they quickly move along their own pathway to a Data-Driven Evolution. So, what do AI-Powered Predictive Analytics look like in practice?
When AI is deployed correctly, it results in a data maturity development process that iteratively adds business value at each point of your operational evolution. Within today’s digitised organisations, there are often multiple operational systems, many of which create their own data siloes. (And the greater the size of your operation, the more likely it is you’ll have more of these data siloes, with varying degrees of data quality). To make full use of the data within these siloes – and to check the integrity and trustworthiness of the information – the data needs to be cleansed and then blended with all other data to provide you with a de-siloed and holistic operational view.
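As a toy illustration of the cleanse-and-blend step described above, here is a short pandas sketch. The data, column names and asset identifiers are invented for the example; the pattern is to normalise each silo's conventions, then merge so that gaps between systems become visible for data-quality review.

```python
import pandas as pd

# Hypothetical data silos: an ERP export and a maintenance-system export,
# each using its own conventions for the same asset identifiers.
erp = pd.DataFrame({
    "asset_id": ["P-001", "p-002 ", "P-003"],
    "cost": [12000, 8500, 15300],
})
maintenance = pd.DataFrame({
    "AssetID": ["P-001", "P-002", "P-004"],
    "downtime_hours": [4.5, 12.0, 1.5],
})

# Cleanse: normalise identifiers so the silos can be matched reliably.
erp["asset_id"] = erp["asset_id"].str.strip().str.upper()
maintenance = maintenance.rename(columns={"AssetID": "asset_id"})

# Blend: an outer merge keeps every asset from both silos; the indicator
# column flags assets known to one system but not the other.
blended = erp.merge(maintenance, on="asset_id", how="outer", indicator=True)
print(blended)
```

In practice this normalise-then-merge loop is repeated across every silo, with each pass surfacing mismatches that feed back into data-quality remediation.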
This in itself is a massive boost to productivity and, of course, your profitability because accurate human analysis of your whole operation becomes possible for the first time. The level of detail this process provides, and improves upon with every iteration, delivers valuable waypoints in the Data-Driven Evolution towards AI.
We expect your company is already using some advanced data science models in an effort to develop a range of business improvements. These models include: predictive analytics; event correlation; root cause analysis; most efficient path definition; deep neural networks; digital twins and many, many more.
If you’re using a combination of these data science models, it means you’re already able to automate operational efficiency improvement through an integrated collection of proven and tested models. This scenario is the closest we have to Artificial Intelligence today, but this can be challenging to implement. The AI-Powered Analytics Platform as a Service that Perfekt offers, seeks to mitigate these challenges and manage the process towards true AI in a timely and efficient manner.
If you’re interested in learning more about the power of AI, our recent whitepaper demonstrates how a proven Analytics Platform as a Service unlocks the value in your data; boosts your company’s efficiency, enhances workplace safety and futureproofs your business.
No matter where you are on your own Data-Driven Evolution, our prediction for the future is that Perfekt will help take your company to the next level, which you’ll see is a street paved with gold.
Mining is at a crossroads, facing intense competition, margin pressure and a highly volatile marketplace. Predictive data analytics that delivers real-time insights to monitor asset performance, reduce unplanned maintenance and extend asset lifetime, cutting cost and improving productivity, is playing an increasingly important role in future-proofing forward-thinking mining companies.
Perfekt’s Predictive Analytics Platform as a Service offers a high-value, low-fuss and proven approach to real-time data analytics that can be deployed on a project-by-project, pay-as-you-go basis.
Discover how a Predictive Analytics Platform as a Service can:
· Accelerate innovation
· Manage complexity
· Reduce risk
· Help control cost
Perfekt is a highly credentialed technology specialist with a strong track record and rigorous approach to data engineering, leveraging its data integration and analytics platforms to create cost effective and impactful Predictive Analytics Platform as a Service solutions for the mining sector.