IT managers have long envied the power of all-flash storage for accelerating their most demanding workloads. Unfortunately, the much higher cost per terabyte of all-flash storage compared to traditional spinning disk has, in many cases, kept that power in the realm of wishful thinking. A data reduction technique called deduplication, however, is bringing all-flash storage into the realm of economic reality.
For years, traditional spinning disk technology reigned as the only option for enterprise storage. While the speed and performance of other infrastructure components improved exponentially, hard drive technology improved only incrementally, with performance remaining largely flat since 2000, when the first 15K RPM drives were released. Traditional hard drive storage has been, and remains, a common bottleneck, making otherwise powerful IT systems sluggish and inadequate for the demands of modern workloads.
All-flash storage arrays are all the rage among IT teams looking to coax as much performance as possible out of their storage infrastructure. Thanks to the generational maturity of the technology combined with data reduction techniques like inline deduplication and compression, all-flash storage is no longer the economic impossibility it once was.
Jealous of the pure power of all-flash but concerned about cost? The latest generation of all-flash arrays has a couple of tricks up its sleeve that make the difference in usable capacity and cost between flash and spinning disk much less significant: inline deduplication and compression.
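To make the idea concrete, here's a minimal Python sketch of how inline deduplication and compression compound before data ever lands on flash: each incoming block is hashed, duplicates are discarded, and only unique blocks are compressed and stored. It illustrates the general technique, not how any particular array implements it, and the block size and sample data are made up.

```python
import hashlib
import zlib

def ingest(blocks, store):
    """Deduplicate, then compress, a stream of fixed-size blocks.

    store maps content hash -> compressed block (the pool of unique data).
    Returns the logical-to-physical data reduction ratio for this stream.
    """
    logical = physical = 0
    for block in blocks:
        logical += len(block)
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:  # only never-before-seen content costs capacity
            store[digest] = zlib.compress(block)
            physical += len(store[digest])
    return logical / physical if physical else float("inf")

# 100 logical 4 KB blocks but only 10 distinct patterns: dedupe alone
# yields ~10x, and compression multiplies that further.
blocks = [bytes([i % 10]) * 4096 for i in range(100)]
print(f"effective reduction: {ingest(blocks, {}):.0f}x")
```

The multiplication is the point: a 5x reduction makes a $5-per-GB flash array behave like $1 per GB of usable capacity, which is how flash closes the cost gap with spinning disk.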
Meeting with clients, I'm hearing a lot of folks complain about two common problems. First, backups are taking far too long to complete. Second, recovering a single file is a long, painstaking process.
How do you know if your NetBackup Domain has become too large?
When NetBackup Domains grow beyond Symantec's best-practice guidelines, performance and manageability can begin to degrade. During my 15 years as a NetBackup engineer, I've found that the following five criteria are worth considering:
Catalog size - As the catalog grows, the time required to protect it can become excessive. The catalog backup is effectively a database backup, and the synchronization checkpoints and locking it requires alongside other NetBackup jobs add extra processing time. (A quick way to gauge catalog size on disk is sketched below.)
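Here's a small Python sketch for that first check: it totals the on-disk size of the catalog directory. The path is the standard UNIX master server default; adjust it for your installation, and treat the threshold as purely illustrative rather than a Symantec guideline.

```python
import os

# Default UNIX catalog location; adjust for your master server's layout.
CATALOG_PATH = "/usr/openv/netbackup/db"

def dir_size_gb(path):
    """Walk a directory tree and total its file sizes, in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed mid-walk; skip it
    return total / 1024**3

size = dir_size_gb(CATALOG_PATH)
print(f"catalog size: {size:.1f} GB")
if size > 500:  # illustrative threshold only, not an official number
    print("catalog may be large enough to slow catalog backups")
```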
Firmware Management – The Evil Mistress
Firmware management is not traditionally a topic that anyone gets excited about. In fact, it’s typically relegated to the bottom of the IT team’s list of priorities because it’s often painful and time consuming.
From a management perspective, budget spent on firmware management is very difficult to justify, and its ROI is nearly impossible to measure. But firmware is an evil mistress: if it's not attended to often enough, out-of-date firmware can stall an OS upgrade, prevent the introduction of new hardware or, worse, cause an unexpected outage of critical equipment in the data center.
We all have stories of servers, switches, storage arrays, and the like that ran fine for years until one day the system crashed for no apparent reason. We later discovered that a critical bug fix had been quietly released two firmware revisions prior to the one we were running. Grrrr!
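One lightweight defense is a standing audit that compares installed firmware against a baseline of minimum revisions compiled from vendor release notes. Here's a Python sketch of the idea; the inventory and baseline data are hypothetical placeholders for whatever your asset database actually tracks.

```python
# Hypothetical inventory: device -> (firmware component, installed version).
# In practice this would come from your asset database or vendor tooling.
installed = {
    "array-01": ("controller_fw", (7, 2, 0)),
    "switch-03": ("fabric_os", (8, 1, 2)),
}

# Minimum revisions containing the critical fixes you care about,
# maintained by hand from vendor release notes.
baseline = {
    "controller_fw": (7, 4, 1),
    "fabric_os": (8, 1, 2),
}

def audit(installed, baseline):
    """Yield devices running firmware older than the known-fix baseline."""
    for device, (component, version) in installed.items():
        required = baseline.get(component)
        if required and version < required:  # tuples compare field by field
            yield device, component, version, required

for device, component, have, need in audit(installed, baseline):
    have_s = ".".join(map(str, have))
    need_s = ".".join(map(str, need))
    print(f"{device}: {component} {have_s} is below required {need_s}")
```

Run on a schedule, even something this simple surfaces the "quietly released critical fix" problem before the crash does.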
The addition of Storage Lifecycle Policies (SLPs) in Symantec NetBackup 7.6 has given backup administrators a very effective tool for managing backup and snapshot images, duplication, and replication.
While SLPs simplify these tasks, careful consideration is still required when monitoring and managing the policies and the components they use in your NetBackup environment.
The SLP administration window we all know and love is a great tool for managing SLPs, but there are other things you should consider to help manage them.
To keep your SLPs managed and optimized, please consider these eight additional configuration tips.
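One of those considerations is watching the SLP backlog itself. NetBackup ships an nbstlutil command for inspecting lifecycle state; the Python sketch below shells out to it to report images whose SLP processing hasn't completed. The path is the UNIX default, and the flags are as I recall them from the NetBackup documentation, so verify both against your version before relying on this.

```python
import subprocess

# nbstlutil ships with NetBackup; "stlilist -image_incomplete -U" reports
# images whose SLP duplication/replication has not finished. Verify the
# flags against your NetBackup version's documentation.
NBSTLUTIL = "/usr/openv/netbackup/bin/admincmd/nbstlutil"

def incomplete_slp_images():
    """Return the raw report of images still awaiting SLP processing."""
    result = subprocess.run(
        [NBSTLUTIL, "stlilist", "-image_incomplete", "-U"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    report = incomplete_slp_images()
    # A persistently growing backlog usually means duplication or
    # replication targets can't keep pace with the backup workload.
    print(report or "no incomplete SLP images")
```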
It happens all the time: a customer has a problem that's tough to solve. I was recently asked how best to protect a virtual environment stretched between two data centers. The customer wanted all administration through vCenter, with instant recovery and the ability to meet stringent RPOs and RTOs. And the catch: they wanted the backed-up data stored on a different vendor's array using a different protocol, NFS instead of Fibre Channel (FC) block storage.
Is that light you see at the end of the proverbial tunnel, or is it the headlight of an oncoming train?
The future of your professional career just might depend on your ability to successfully lead your company to the cloud. So many things to consider…
Yes, the cloud offers many advantages to your business including agility and high levels of fault tolerance, but in and of itself, the cloud does not release you from backing up your data in order to protect yourself from user error, data corruption, or data loss.
Just like provisioning new applications or scaling current ones, you need to consider every angle when protecting your data that lives in the cloud.