On the 8th of December, 2015, VMware released a patch for ESXi 5.5 to address the POODLE vulnerability in SSLv3. The patch disables SSLv3 on the host altogether and only allows the more secure TLS protocol to be used instead. The patch is called ESXi550-201512101-SG and is titled “Updates esx-base” in Update Manager, and it can cause problems if you apply it to your hosts before upgrading vCenter.
If you are already at the vCenter version they released at the same time (vCenter 5.5 U3b) then it is safe to upgrade the hosts to this patch level as communication will continue to work fine over TLS.
However, if your vCenter is below this latest 5.5 Update 3b level and you install the ESXi patch, you will not be able to connect to the host in vCenter after the patch is installed and the host has done its subsequent restart. This is because vCenter will still be trying to communicate with the host via SSLv3, which the host now has disabled.
If you do install the patch, you have two options to restore communication: re-enable SSLv3 on the host (following the procedure here http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2139396#hostd) or upgrade vCenter to 5.5 U3b (the preferred method).
If you are reading this before you install the patch, then planning the upgrade to vCenter 5.5 U3b first would be the ideal solution. There are also newer versions of VMware’s other software that uses SSLv3, and these need to be upgraded too, e.g. SRM, vRealize Operations, VMware Tools, etc. VMware has an article on the order in which to upgrade these applications here – http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057795
Something else to be mindful of is that external applications that communicate with vCenter and ESXi may currently do so over SSLv3, so upgrading to 5.5 U3b may stop this communication from working if TLS support is not implemented in the application. This can be tested by going into the Advanced Settings of vCenter, disabling SSLv3 in the “SSL.Version” setting and restarting vCenter. Testing this at the ESXi host level can be done using this procedure: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2139396#hostd.
- Cluster size up to 64 hosts (32 previously).
- Each host can run up to 1024 VMs and have 480 logical CPUs and 12TB of RAM.
- Local ESXi account management is now through vCenter (no more having to rely solely on the root account). This also comes with account lockout and password complexity options.
- Improved audit logging within the host, where vCenter users’ details are now logged against actions within the host’s log files.
- Compatibility level 11 now supports VMs with up to 128 vCPUs and 4TB of RAM, and you can add USB 3 controllers.
- Clustering support for Windows 2012 R2 and SQL 2012, and the ability to vMotion clustered VMs with physical mode RDMs between hosts.
- NVIDIA GRID vGPU support for VDI VMs
- FT now supports VMs with up to 4 vCPUs and 64GB of memory.
- FT also now supports snapshots, which increases the chances that you will be able to back up an FT VM via a VMware-level backup. Check with your backup vendor first though.
- FT also now creates duplicate storage for the VM, which means the primary can potentially be running on local storage with the secondary copy on local storage on another server.
- vSphere Replication now supports up to 24 recovery points per VM
- vSphere Replication can now compress the replication traffic reducing the bandwidth requirements
- Supports interface and bandwidth control of vSphere Replication traffic
- Ability to vMotion a replica without having to fully resynchronise
- vMotion can now be completed across vSwitches (useful for cross cluster migrations) and even across vCenter Servers.
- Long Distance vMotion allows migrations across large geographical areas (assuming <100ms latency). It requires 250Mbps of bandwidth per migration and a stretched Layer 2 network across both sites, but can be very useful for moving VMs from site to site with no downtime.
- The backend components of vCenter, such as SSO, Inventory Service and Web Client have now been combined together into a role known as the Platform Services Controller (PSC). This role can either exist on the vCenter Server itself as an embedded PSC, or can be installed outside of the vCenter Server in a separate VM. The PSC can either be installed within Windows or as an appliance.
- The embedded database for vCenter has now been replaced with PostgreSQL which scales much larger than the previous SQL Express editions. For example when using Windows and the embedded PostgreSQL database VMware now supports up to 20 hosts and 200 VMs (much more if you use external DBs) and the vCenter Appliance using the embedded PostgreSQL DB now supports 1,000 hosts and 10,000 VMs. External Microsoft SQL Server support is not available with the vCenter Appliance, but you can still use an external Oracle DB if need be.
- Linked Mode is now called “Enhanced” Linked Mode and the information is replicated between PSCs instead of vCenter Servers. This means that no special configuration needs to be done on the vCenter Servers: as long as the PSCs are in the same Single Sign-On domain, the vCenters that use them will work together in linked mode. You can even mix appliance and Windows installs of vCenter in linked mode.
- Certificate management has had a huge overhaul too. The PSC now acts as the VMware root Certificate Authority and handles certificate generation for hosts and VMware solutions. There are various ways this CA can be set up, but in most cases setting it up as a subordinate of an existing Active Directory Certificate Services CA would be the best way forward, instead of using the self-signed certificates of a default configuration.
- Multisite Content Library is a new feature that allows templates, ISOs and scripts to be replicated between vCenters. As content is updated at one site it will automatically update at the other site(s). This replication can be configured with bandwidth limits and set replication hours if required.
The Traditional vSphere Client
VMware has made it clear since the 5.1 days that the future client of choice for vSphere management is the vSphere Web Client and that one day the traditional C# vSphere client won’t exist. That day has not yet come, but more and more functionality is being added to the web client and not made available to those using the traditional client. Thankfully VMware has added the ability to edit most of the properties of VMs upgraded to hardware versions above 9; it is just the new features added since 5.1 that cannot be changed without going into the web client.
Virtual Volumes (V-VOLs)
V-VOLs are a new way of storing and managing disks for virtual machines on storage arrays. At a high level, they allow the backend storage provisioning and management operations to be done inside the vSphere Web Client at a VM disk level, as opposed to a datastore level. The end outcome of a V-VOL implementation is that when you provision a virtual machine disk you specify the size and performance you need for the VM, and an automation engine creates the volume directly on the array, in the best RAID group or pool for you. Hopefully in the near future V-VOL orchestration will also handle array replication and snapshots of V-VOLs, so you won’t need to go into the array or replication devices to configure and manage these features. V-VOLs are still in their early stages, with only certain vendors and arrays currently supporting them, and what they support and how they achieve it varies between vendors. You can use this link to find the vendors and arrays that currently support V-VOLs – https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vvols
Topology Changes
So if this all sounds great and you want to upgrade ASAP, there is one change you need to be aware of before you dive into the upgrade notes, prerequisites, etc. VMware has changed the supported topologies and has deprecated support for a very common setup in version 6.0. If you run a single vCenter in your environment and have no foreseeable plans of increasing this (for example, to include a DR site), then this doesn’t apply to you. But if you do currently (or plan to), then you most likely also run the SSO, Web Client, Inventory Service, etc. on your current vCenter Servers, and it probably looks something like this: Remember that the SSO, Inventory Service and Web Client are combined into the PSC in version 6, so if you do a straightforward upgrade it will look like this: This makes the upgrade process a much more arduous task than previous upgrades and, depending on the environment and the way it’s set up, may involve a full reinstall of vCenter.
Conclusion
Hopefully this gives you a helpful quick rundown of the new features of vSphere 6 and helps prepare you for the challenges ahead with the upgrade path. If you would like further information on any of these points, please talk to your Perfekt account manager.
Let’s face it, this topic has been in the back of everyone’s mind for quite some time, yet few organisations of scale can achieve it. Tape has been around since the 1950s, when it was pioneered by IBM as a low-cost, offline and portable storage medium. In the last 65 years it has seen significant transformation, with the market now centred almost entirely on the LTO Ultrium cartridge format.
LTO-6 is the current generation, offering roughly 5TB of compressed data per cartridge, with a roadmap that extends to LTO-7 in October 2015 and to LTO-8, which will see capacities increase even further over the coming years.
The reality is that, since my time at Quantum between 2000 and 2007, there has been a dramatic change in the way tape is used. Because of its portability and sequential nature, tape became the reason many people disliked backup. Yet backup need not be so dull!
These days, backups are staged to disk first before being copied to tape. Smart backup solutions are able to electronically copy backup content from one second-tier disk system to another, usually in an alternate site, so that the need for making regular tape copies is significantly diminished.
In CommVault’s terminology this is called a DASH copy. DASH is a horrible acronym for Dedupe-Accelerated Streaming Hash, which is about as bad as all of those terrible acronyms IBM made up in the 1980s for their products. Forget the acronym; DASH just means FAST, and that’s what it does by transferring only the new and unique sub-blocks of data between the primary and secondary copies of backup content.
This technology means that you can copy backup (or archive) data in any of these scenarios:
- From Production to DR
- From one or more remote sites to head office/data centre
- From any site to a cloud data centre
- Or all of the above together in any combination
The upshot is that if you are copying data between disk arrays at your sites, your reliance on tape is significantly diminished.
When DASH copy is implemented, Perfekt often finds that clients purchase a one-, two- or four-drive tape library or autoloader and make only weekly, fortnightly or monthly tape copies, which are more for archival purposes than for traditional restores.
Because of the licensing schemes available with the CommVault Capacity License Agreement and the new Solution Bundles, clients are no longer metered on the back-end capacity of backup data stored. You can retain a day, a month, a year or a decade on disk for no additional license charge. You just need:
- The disk space to retain it
- A sufficiently large dedupe database on your media agent server
What do you need to get DASH Copy Working?
There are a couple of “considerations”. A consideration is a problem if you don’t think it through. If you plan ahead, then you will not run into issues.
The first is how to make the initial copy of data. DASH copy is incredibly efficient at moving backup content between sites. However, there is no special magic. That first copy will take some time to move. How long depends on:
- The data volume
- The network link (and how much of it you can use for this)
- A whole bunch of other “overheads”.
The devil is in the detail, so at Perfekt we have devised a simple formula to help you work this out which provides an approximation of the duration, in days, for the initial copy:
Duration (days) = Data Volume (GB) ÷ Available Link Speed (Mb/sec) × Constant
The constant factors in compression, TCP overhead, as well the CV Index and dedupe hash size. The following is a summary of the estimated numbers used for these factors:
- An estimated -15% allowance for the benefits of compression is given
- A +30% overhead for TCP/IP on the link speed
- +5% for the CommVault Index of the Data
- The Dedupe database creates a hash of each 128K block, which is 4K in size (+3%)
- Finally a unit conversion is made to account for data in GB and link speed in Mbps to output a duration in days
As an example, a site with 500GB of data on a link with 10Mbps available would take at least 5.7 days to complete the initial copy process.
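As a rough sketch only (the constant below is derived from the factors listed above, and the variable names are ours, not CommVault’s), the estimate can be reproduced in a few lines of PowerShell:

```powershell
# Sketch of the initial-copy estimate; the constant is derived from the factors above.
$dataGB   = 500     # data volume to seed
$linkMbps = 10      # available link speed

# Constant = compression (0.85) x CV index (1.05) x dedupe hash (1.03) x TCP overhead (1.3)
#            x (8 x 1024 Mb per GB) / 86,400 seconds per day  ~= 0.113
$constant = 0.85 * 1.05 * 1.03 * 1.3 * (8 * 1024) / 86400

$durationDays = ($dataGB / $linkMbps) * $constant
"{0:N1} days for the initial copy" -f $durationDays    # ~5.7 days
```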
As an alternative to transferring the initial backups over the WAN, it is possible to seed the data using a portable USB-attached hard drive. In this approach, the initial data set is transported manually on the drive before the regular (e.g. daily) DASH copy process is established.
Such a process, however, involves considerable time and effort in handling and shipping the drives, so Perfekt would suggest considering USB seeding only where the WAN transfer time exceeds 14 days.
Of course, once the seeding is complete the ongoing copies are much smaller, since users do not rewrite entire reports, databases, presentations or spreadsheets every day. What is captured is just the sub-block changes, and these are efficiently replicated to the alternate site after each backup.
You can use the same formula as above, but take the daily sub-block change rate of between 2% and 5% of the data volume to determine the nightly DASH copy duration.
Taking our example of 500GB of data at a site with a 10Mbps link, assume a daily change rate of 2% or 5%. Pop that into the formula and you will see that the nightly DASH copy duration on the same 10Mbps link is:
- 2%: 2hrs and 45 mins
- 5%: 6 hrs and 51 mins
These are certainly achievable in an overnight window.
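Continuing the sketch above (the 2% and 5% change rates are the assumptions stated earlier):

```powershell
# Nightly DASH copy estimate, reusing the constant from the initial-copy sketch above.
$dataGB   = 500
$linkMbps = 10
$constant = 0.85 * 1.05 * 1.03 * 1.3 * (8 * 1024) / 86400

foreach ($changeRate in 0.02, 0.05) {
    $hours = ($dataGB * $changeRate / $linkMbps) * $constant * 24
    "{0:P0} daily change: {1:N1} hours" -f $changeRate, $hours    # ~2.7 and ~6.8 hours
}
```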
We recommend a minimum link speed of 10Mbps to support DASH copy. This ensures that the first copy can be made in a sensible time, and the link is also fast enough to handle the nightly copy should there be a rare occasion where something dramatic causes the change rate to hit 10 or 15%; it may take a day or two to catch up. If the link were too slow, it could fall behind for so long that there would be an exposure in getting the data off site.
With ongoing data growth and general system changes it is important to monitor transfer times of the DASH copies to ensure that they are completing in a reasonable time period and not lagging behind. Perfekt suggests that this is done with Aux Copy Fall Behind Alerts in console progress reporting.
Also the DASH copy summary report should be reviewed each month to monitor the overall health of the copies. This will help identify sites where greater link speeds may be required in the near future.
What if you don’t have a second site? Look up in the sky!
Not a problem. There are oodles (the technical term meaning more than you could imagine) of cloud providers wanting you to store your backup data with them. There are two ways of storing CommVault backup data in cloud storage (I hate using the term “the cloud” as if there were only one; the reality is that there are many offerings, they are all different, their costs are not the same, and a good number will be out of business in less than five years).
The first way is to DASH copy to a cloud provider, and this is the preferred approach. You would stand up a virtual CommVault media agent server in the cloud and purchase some cloud storage. The media agent does some hefty work, so the only gotcha here is the compute cost of the virtual servers if your chosen cloud provider charges this way. It is best not to use this type of model for backup unless you pilot the process, measure the IOPS and extrapolate this within the costing model of your cloud provider.
The second way is to move data directly to some type of cloud storage without DASH copy. The issue with this is that you usually pay cloud providers per GB per month, and any attempt to push large data volumes to a cloud service without the benefit of dedupe will be unaffordable after a few years of a lengthy backup retention strategy. [It is affordable if you only want 1-6 months of content but that is not the normal business data retention cycle for most organisations, especially if you are looking to remove tape altogether. Any longer than a few years and you will quickly work out that you can buy a small tape library with LTO-6 drives and have plenty of change compared to the cloud costings].
Removing Tape – What Disk is Needed?
In such a topology, tape provides two key functions:
- A point in time complete “archive” copy beyond the longest disk-based retention period
- A copy of data as a last chance of recovery if all else fails
Because deduplication means you can quite effectively retain many years of data copies on disk, the need for point 1 is negated. Addressing point 2 is a business decision, and many sites do not have this today.
Back on point 1, there are a few basic factors that will need to be determined in order to estimate the size of disk array to retain your online backup content:
- How large is the first full copy of data: typically we see about 20% reduction due to compression and some deduplication
- Retention: for how many years you will retain backup copies
- Number of backups: eg 5 days per week or 7 days per week, 52 weeks per year
- Daily rate of change: typically between 2% and 5%, depending on the workload
The disk space required can then be approximated using this formula:
Disk Space Required (TB) = Protected Data Volume (TB) × (Allowance for compression & some dedupe + Number of backup days retained × Rate of daily change (2-5%))
So in a site with 10TB of data, with the normal 20% savings on the first backup, backups occurring 5 days per week and 52 weeks per year, online retention of 10 years and 2% daily change, the usable disk volume required is 528TB. Utilising 4TB nearline SAS drives, this could be accomplished in a storage array with dense enclosures in a tidy 9 rack units of footprint!
Of course this is simplified: volumes will start out smaller and grow with increased retention, and understandably there will be primary data growth and fluctuations in usage patterns over the retention period. Still, it provides an indication of the likely capacity required.
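As a rough sketch of how those numbers fall out (the variable names are ours, not from any CommVault tool):

```powershell
# Sketch of the disk sizing estimate above.
$protectedTB     = 10       # size of a single full backup of the protected data
$firstFullFactor = 0.8      # ~20% saving on the first full from compression and some dedupe
$backupsPerWeek  = 5
$retentionYears  = 10
$dailyChangeRate = 0.02     # 2% of the protected volume changes each day

$backupDaysRetained = $backupsPerWeek * 52 * $retentionYears    # 2,600 backup days
$diskRequiredTB = $protectedTB * ($firstFullFactor + $dailyChangeRate * $backupDaysRetained)

"{0:N0} TB of usable disk required" -f $diskRequiredTB          # ~528 TB
```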
Aren’t Spreadsheets Wonderful!
To extrapolate running costs of the required backup storage, here is a quick comparison of the disk array outlined above for the second (remote) site copy of the data, retained for 10 years:
- On-premise/co-lo high density storage array, 528TB usable, purchased up front with 10-year vendor support and inclusive of running costs: $334K ex GST
- Cloud storage, incrementally growing over 10 years, based on an ingest tier of $0.0259/GB per month, a storage tier of $0.012/GB per month and compute (to run the Media Agent) at $1.169/hr: $393K ex GST
The cloud figure does not include costs for retrievals, and retrievals will be “problematic” at best, only to be required if all else has failed.
So there is not a great deal in it when you factor this over a 10-year period, but it is useful to benchmark the differences between the available options. Of course, this is simply to protect 10TB of data without taking into account its own growth due to new workloads, etc. The operational note on retrieving data is important: the on-premise storage will be very simple for restoration, whereas the cloud-based storage will be very slow (“tape-like”) and only to be used in emergencies.
And if the numbers just don’t work, there is still tape
Full-scale recoveries are rare, and most restore jobs are for small data sets. Depending on data volumes, retention requirements and other business needs, we are finding today that tape is still a very low-cost way of creating archival data copies. Made once per month, for example, a single- or dual-drive LTO-6 autoloader is all that is needed to push a retention copy to tape that will probably never be needed, but gives surety and another process to demonstrate strong data governance.
Should any of this be within your thinking, then give the experts at Perfekt a call. We love to help with your backup strategies.
Traditional Feature Based Licensing (also known as a la carte)
Back in the year 2000, when CommVault entered the Australian marketplace, it had a licensing scheme similar to many other backup products, where the features purchased matched each environment’s components.
- Agents for specific operating systems, databases, applications. This included Windows, Unix, SQL, DB2, Lotus Notes, Novell, Exchange, Active Directory, and so on. For each one of these environments you needed an agent so that the data could be protected to allow for the best recovery
- In addition, you had to license the Media Agent (backup) server, or multiples in a multi-site environment
- You licensed the tape library and each drive
- You licensed the capacity (in TBs) that you wrote to in a backup-to-disk environment
- You also had to license other options and features from a very rich and comprehensive list
- Many times customers purchased these in special bundles to reduce the price
- The feature set grew with the addition of new functions. One key addition was that of deduplication (aka the Advanced Disk Option). This meant that clients licensed the disk space that CommVault’s deduplication system wrote to, measured in TB. This was more expensive than the “Standard Disk” backup method, yet you could retain many more backups in significantly less space, thus improving data protection and, importantly, speed of recovery.
- There were also a range of features of email and file server archiving, content indexing and more.
Capacity Based Licensing (CLA)
Capacity Based Licensing was introduced around 2010 with version 8, and this dramatically simplified the way in which licenses were consumed. Instead of being tied to a specific feature set, sites were licensed by the number of TBs that they protected at the “front end”. This is equivalent to measuring the size of a single full backup of all important data and basing the licensing on that volume. The CLA scheme became very popular because it meant any change to the environment, e.g. moving from Novell to Microsoft, didn’t mean that you had to purchase new features. Importantly, the CLA scheme allowed as many “back end” TBs of data to be retained as required, without regard for retention period or multi-site copies. Organisations were then able to create DR copies of their backups for no extra licensing and significantly, if not completely, reduce the need for tape in their data protection scheme. The CLA “front end” TBs were measured in a few ways:
- Data Protection Enterprise (DPE) – all you can eat in the way of backup; includes all features
- Data Protection Advanced (DPA, previously called ADM) – suited to most virtualised environments except the very high end
- Data Protection Foundation (DPF) – for server-level backups only, without application agents, and very suitable for physical server data protection
“Solution Bundle” Licensing
A new feature set launched in late 2014 means that certain CommVault features are now very affordable. One popular example is hypervisor-based backups, which are becoming an industry standard; in acknowledgement, CommVault has released simplified licensing at the price point of its much less mature competitors. Available standalone or in addition to a CLA, the Cloud Simpana “cSIM” licensing can be purchased in packs of 10 VMs or by hypervisor processor socket (similar to VMware licensing). When purchased by processor socket it allows unlimited VMs to be protected on the licensed ESX or Hyper-V hosts. This is ideal for many organisations as it makes it easy to accommodate growth: add another ESX or Hyper-V host? Don’t forget to get backup licensing for it!
VMs protected under cSIM licensing do not consume the CLA TB-based licensing, and cSIM licensing also provides dedupe functionality, tape support, media agents, DR copies, and so on. This is great value for new and existing customers alike: new customers receive all the basic licensing required to run a Simpana environment with dedupe, tape and VM backup functionality, while existing CLA customers free up significant amounts of backup license utilisation, allowing for growth in application-level backups. CommVault has a range of offerings in this new solution bundle category that work as an adjunct to CLA licensing (it is important to note these do not intermix with traditional feature licensing and they do require version 10 of CommVault). The areas covered by the solution bundles include:
Virtual Machines
- By socket or by 10-pack of VMs (as described above)
- Intellisnap (hardware snapshot integration) and end user self-restore add-on
- VM cloud management: provision VMs locally or in the cloud, for example for spin-up of VM for site recovery/testing, add-on
- VM lifecycle management, whole of VM archiving for dormant VMs, for example, add-on
- Basic backup and recovery, per device up to 2TB ea
- File Sharing eg corporate drop-box replacement with “Edge Drive”
- Endpoint compliance search add-on
- Bundle of the above
- Entry (7TB), mid-range (14TB) and enterprise (21TB) bundles
- Email archive and content indexing, per mailbox
- Compliance archive add-on
- Bundle of the above
- Stay with feature licensing or convert to CLA?
- Straight CLA or supplement with the new solution bundles?
The reason for this post is that I often come across CommVault environments that have hosts which have not been decommissioned correctly. This results in the unnecessary consumption of both licensing and storage space/media. It seems that the correct way to decommission servers from Simpana isn’t something that gets highlighted enough during CommVault training. This also seems to be one of the first things people get wrong when they inherit a system. Backup administrators often complain that jobs aren’t being aged correctly, or they have unexpectedly used up all their licensing. Incorrectly decommissioned servers are the #1 reason for this.
How the server you are decommissioning is protected will determine how it needs to be removed from Simpana. Licenses may need to be released, and backup jobs may need to be cleaned up.
The following flow chart describes the considerations and procedures required to successfully decommission servers from CommVault:
Here are the procedures outlined in the flow chart:
A. Go through each installed agent and note down with which Storage Policies it is associated. It is important to do this first because after releasing the license this will become more difficult.
B. Right click on the host in the CommCell browser and select All Tasks -> Release License. More info is available on BOL. In completing this step you have now freed up the licenses utilised on this server. For people with capacity based licensing this will only be reflected when Data Aging is run (normally scheduled to run daily at 12pm).
C. Here is where most people get caught out. If you think back to your CommVault training, you will recall that retentions for copies are specified in days and cycles. Days are pretty straightforward, and cycles refer to the number of complete backup cycles (a new one is started with every full backup). Both need to be met before a job can be aged. Wait a minute, how can that be? The cycles requirement will never be met for the last full backup cycle on a deconfigured agent, since it will never produce any new jobs. That’s right: Simpana will keep the last backup cycle (the full plus any incrementals) forever, until you manually delete the jobs.
There are two solutions to this challenge:
- (D.) Manually going through each copy deleting the backup jobs, and/or creating calendar items to remind yourself to clean those jobs up later, or…
- Enabling the ‘Ignore Cycles Requirement on Deconfigured clients‘ option.
I strongly suggest doing the latter, since it makes management much easier. It’s really how most people would expect the system to operate.
E. Simply right click on the subclient and select delete. Historic jobs will be aged as per normal using the configured days retention. Since this is subclient level stuff, cycles don’t apply, so you don’t need to worry about manually deleting any jobs. To restore VMs from these old backup jobs, do your Browse/Restore operation on the backup set, or alternatively browse history at the backup set level.
F. Browse the properties of the subclient associated with the VM, and delete it from the contents. For capacity based customers, licensing will be freed up when the subclient next runs a full backup.
I hope you found this useful, don’t hesitate to contact me with any comments or questions.
1. Pre-Empt
They say prevention is better than cure. Here is a list of things that you should be asking yourself:
- Have you reminded your users to scrutinise every email? Most CryptoLocker infections come from emails convincing users that they have received a speeding fine or a package tracking notice that requires them to download and run an executable. Being asked to run executables from the internet should always be a cause for wariness. If you smell deceit, hit delete!
- Are you using a robust backup product? When was the last time you tested restoring from backups? Do you keep backups off site? Don’t get caught out. Good, reliable backups are CRITICAL. How long is your backup retention? What if you only notice files encrypted weeks later…
- Do you have a transparent proxy that filters web traffic? Palo Alto, Sophos UTM, McAfee, Check Point, etc.
- How about on your workstations? Is the antivirus up to date?
- Have you applied an auditing policy on your file server? This won’t stop CryptoLocker from running, but it will make finding the infection much easier. Simply enable file system auditing and apply an auditing rule on your file shares to capture file creation/deletion. This way you can find out from the logs which user, logged in to which machine, deleted documents and created encrypted versions (see the sketch after this list).
- Do you trust your antivirus? There are hash-based file blocks you can configure using Group Policy to add further protection. These will, at a Windows policy level, stop CryptoLocker from running if it is a variant that matches the policy.
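As a minimal sketch of the file-share auditing point above (the share path is an example, and “Audit object access”/File System auditing also needs to be enabled in your audit policy), something like the following can be used:

```powershell
# Minimal sketch: turn on file-system auditing and add an audit rule to an example share path.
# Run as an administrator; adjust the path to your own file shares.
auditpol /set /subcategory:"File System" /success:enable /failure:enable

$sharePath = "D:\Shares\Departments"    # example path only
$acl  = Get-Acl -Path $sharePath -Audit
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAuditRule `
            -ArgumentList "Everyone", "CreateFiles, Delete", "ContainerInherit, ObjectInherit", "None", "Success"
$acl.AddAuditRule($rule)
Set-Acl -Path $sharePath -AclObject $acl
```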
2. Identify Source
Once you are infected, your first step should be to identify which workstation is infected. Otherwise you can restore your files all you like; they will just get re-encrypted.
3. Clean the Infection
CryptoLocker is not a worm: the encrypted files won’t infect anyone else, and unless you run the executable on another machine the infection will not spread. Still, it’s best to isolate the workstation as soon as possible by unplugging the network cable. Use your usual malware removal tools, such as Malwarebytes, to clean the system of CryptoLocker.
4. Identify Damage Done
Now it’s time to determine the extent of the damage. Scan the user’s mapped drives for .encrypted files. Here is a simple PowerShell two-liner to do it. Change the value of $path to the location you want to scan. The script will output a text file called encryptedList.txt which will contain all the files that have been encrypted.
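The original script is not reproduced here, so the following is a sketch of what such a scan could look like (the path shown is an example):

```powershell
# Sketch: find encrypted files under $path and write the list to encryptedList.txt.
$path = "\\fileserver\userdata"    # example path - change to the location you want to scan
Get-ChildItem -Path $path -Recurse -Force -Include *.encrypted |
    Select-Object -ExpandProperty FullName |
    Out-File -FilePath .\encryptedList.txt
```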
6. Clean
Once you are satisfied that you have restored everything, you will want to delete the encrypted files and the DECRYPT_INSTRUCTIONS.html files that were created. Here is another simple PowerShell script.
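Again, as a sketch rather than the original script (same example path as above; this is destructive, so verify your restores first):

```powershell
# Sketch: remove the encrypted copies and the ransom-note files left behind.
$path = "\\fileserver\userdata"    # example path - use the same location you scanned
Get-ChildItem -Path $path -Recurse -Force -Include *.encrypted, DECRYPT_INSTRUCTIONS.html |
    Remove-Item -Force -WhatIf     # drop -WhatIf once you are happy with what it reports
```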
7. Post-Mortem
Now that you have survived CryptoLocker, it’s time to ask the important questions.
- How did the infection occur? Did it originate from an email? How did it get past our defences? Was the antivirus vendor aware of this variant?
- Were you appropriately prepared? Did you know how to use your backup product or were you second guessing every option?
- How do we stop future infections? Better web filters? Better antivirus? Do users need further education?
Western Water has installed Hitachi Data Systems’ new Unified Storage System (HUS), which will deliver block and file storage from one platform.
Western Water will be working closely with Hitachi Data Systems gold partner Perfekt to align its IT strategy with the requirements of its business. It will be able to implement a reliable, secure and highly available information technology solution, supported by best-practice processes and services, which will ensure long-term value to Western Water’s business and customers.
Jeff Smith, assistant manager and systems/network administrator at Western Water, said: “Over the last two years, Western Water has seen a steady rise in the amount of information it manages due to higher quality aerial images and geographical information systems data, email archiving, CCTV footage and an increase in social media and real-time applications. As such, the organisation needed to plan for its future growth as well as meet its current information management and storage requirements. The Hitachi Unified Storage solution was chosen due to its ability to support Western Water’s ‘Next Generation Model’ – allowing a single management framework supporting file, block and object storage and integrating seamlessly with the existing environment. The solution will allow information to be more available when required in order to make critical decisions for the business.”
Joining over 1,600 professionals, Perfekt’s Sales Director, Mark Sakajiou, attended the Gartner Symposium/ITxpo 2018 – Australia and New Zealand’s largest gathering of CIOs and senior IT executives – with Joe Jacobs, Gartner AE for Emerging Tech. “…staying at the centre of technology, business, strategy and inspiration remains an essential commitment for us,” says Mark. “We are continuously learning, growing and uncovering emerging trends and expert insights in an effort to both enrich and exceed our clients’ strategic technology and business expectations.” Click here to find out why it’s an exciting time at Perfekt.