Servers Archives | IT Business Edge (https://www.itbusinessedge.com/servers/)

Top Server Management Software Tools 2022
https://www.itbusinessedge.com/servers/server-management-software/ | Mon, 13 Dec 2021
Server management software provides a single view into the health and security of your servers. Explore top tools now.

Servers are complicated machines, requiring cool rooms and regular updates and maintenance to run properly. The problem is, many IT departments are understaffed, and adding server management on top of IT’s already full workload can be overwhelming. Server management software can help lessen the burden on in-house IT and keep the machines running well.

Server Management Software Overview

What is Server Management Software?

Server management software is a set of tools that IT departments use to monitor the overall health of their servers and install new software and updates. Some server management tools also include optimization features that can increase the efficiency of the machines and help them run better. Businesses should also consider server management software that includes capacity management, helping them plan in advance for when they’ll need new hardware.

Some businesses may decide to outsource their server management to a managed services provider (MSP), but someone on their team should still be familiar with the software.

Also Read: PagerDuty Report: Stress on IT Teams on the Rise

Benefits of Server Management Tools

The best server management software should automate simple tasks, so businesses can reduce their workloads and make the servers more efficient.

Automation

Server management software should be able to automatically detect anomalies in the hardware and begin the remediation process while alerting the IT department to their presence. Some tools can also automate patch installation and backups to keep the system more secure.
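
To make this concrete, below is a minimal, generic sketch of the kind of detect-alert-remediate loop these tools automate; the threshold, alert channel, and remediation hook are illustrative assumptions rather than features of any particular product.

```python
import shutil

DISK_USAGE_THRESHOLD = 0.90  # assumed policy: flag anything above 90% full


def notify_it(subject: str, body: str) -> None:
    """Stand-in for a real alert channel (email, SMS, chat webhook, etc.)."""
    print(f"[ALERT] {subject}: {body}")


def remediate(path: str) -> None:
    """Placeholder remediation hook, e.g. rotate logs or purge old temp files."""
    print(f"[REMEDIATION] cleanup started for {path}")


def check_disk(path: str = "/") -> None:
    """Detect a low-disk anomaly, alert the IT team, and kick off remediation."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= DISK_USAGE_THRESHOLD:
        notify_it(f"Disk usage anomaly on {path}", f"{used_fraction:.0%} full")
        remediate(path)


if __name__ == "__main__":
    check_disk("/")
```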

Also Read: Top Automation Software for Managing IT Processes

Faster response times

With the capability to send alerts, server management software helps IT teams respond faster to issues and increase capacity before it’s maxed out. These alerts tell IT admins when new patches or updates are available, if there are anomalies within the system, or when storage space is getting low. If integrated with IoT devices, they can also alert the business if the conditions in the server room become unfavorable, for example, if the temperature gets too high.

Lower operating costs

Because server management tools make servers run more efficiently with less hands-on management by the IT team, businesses can lower their operating costs. They’ll also know well ahead of time when they need to purchase new hardware, so they can shop around and make the best decision, rather than panic-buying a new server because they need more storage. 

Customization

Businesses should be able to customize their server management software to get the alerts that are most necessary for their business and create reports that provide relevant information. The dashboards should also be customizable, so businesses can gather important data from a quick glance. 

Top Server Management Software

Businesses looking for server management software should consider the following tools, chosen for their high user reviews and feature offerings.

ManageEngine OpManager

ManageEngine OpManager is a network monitoring solution that provides visibility into a variety of network devices, including servers, firewalls, and routers. It works with physical and virtual Windows and Linux servers, allowing businesses to monitor their CPU, memory, and disk usage. Real-time network monitoring provides insights into latency, speed, errors and more, and includes over 2,000 metrics to monitor. The pricing is based on the number of devices the business plans to monitor with the system.
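
As a rough illustration of the CPU, memory, and disk telemetry a monitor like this collects, the sketch below polls local metrics with the open source psutil library; it is a generic example, not OpManager's own agent or API, and the 90% alert threshold is an assumption.

```python
import time

import psutil  # third-party: pip install psutil


def sample_metrics() -> dict:
    """Collect one snapshot of basic server health metrics."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),      # averaged over 1 second
        "memory_percent": psutil.virtual_memory().percent,  # RAM in use
        "disk_percent": psutil.disk_usage("/").percent,     # root filesystem usage
    }


if __name__ == "__main__":
    # Take a few samples 30 seconds apart and flag anything over 90%.
    for _ in range(3):
        metrics = sample_metrics()
        alerts = {name: value for name, value in metrics.items() if value > 90}
        print(metrics, f"ALERT: {alerts}" if alerts else "")
        time.sleep(30)
```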

Key features

  • Customizable dashboards
  • Multi-level alert thresholds
  • Network mapping
  • Real-time performance monitoring
  • Remote server monitoring

Pros

  • Responsive and helpful customer support
  • Doesn’t require a lot of configuration to connect with most devices
  • Can give users access to only the machines they work with to simplify monitoring

Cons

  • Error messages are sometimes unclear
  • The reporting lacks customization options

Paessler PRTG Network Monitor

PRTG Network Monitor by Paessler is a network monitoring system that helps businesses manage all of their applications, servers, and many other devices. It offers flexible alerting, allowing users to choose whether alerts come through email, SMS text message, or the mobile application. They can also set alert schedules, so no low-priority alerts come through outside of working hours. The maps and dashboards help organizations visualize their connections and get live status information. Businesses can build their own visualizations or choose from pre-formatted templates. There are five different pricing tiers to choose from, and each license only includes one server.

Key features:

  • Automated failover solutions
  • Distributed and remote monitoring
  • Android and iOS mobile applications
  • On-demand and scheduled reporting
  • Several user interface options

Pros

  • Provides a high level of detail in reporting and monitoring
  • Simple and quick installation
  • Helpful video guides

Cons

  • Some users complained about the technical support
  • Can only categorize or move one item at a time

Site24x7 Server Monitoring

Site24x7 Server Monitoring provides important performance metrics on all servers, including cloud and on-premises. The IT automation features automatically resolve performance issues and implement fail-safe actions. There are more than 100 plugin integrations currently available, but developers can also build their own using Python, Shell, or similar programming languages. Site24x7 offers four pricing plans that vary based on the number of servers and monitoring interfaces.
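
For a sense of what such a custom plugin might look like, here is a hypothetical Python sketch that reports one application metric as JSON; the output keys (plugin_version, heartbeat_required) follow common agent-plugin conventions and are assumptions rather than verbatim Site24x7 requirements, so consult the vendor's plugin documentation for the exact contract.

```python
"""Hypothetical custom monitoring plugin that reports an app's work-queue depth.

Illustrative sketch only; the exact output keys the monitoring agent expects
should be taken from the vendor's plugin documentation, not from this example.
"""
import json
import os


def collect_metrics() -> dict:
    """Gather an application-specific number; here, files waiting in a queue directory."""
    queue_dir = os.environ.get("APP_QUEUE_DIR", "/var/spool/myapp")  # assumed path
    try:
        queue_depth = len(os.listdir(queue_dir))
    except OSError:
        queue_depth = -1  # signal that the queue directory was unreadable
    return {
        "plugin_version": 1,         # assumed convention: bump when metrics change
        "heartbeat_required": True,  # assumed convention: agent expects a liveness flag
        "queue_depth": queue_depth,
        "units": {"queue_depth": "files"},
    }


if __name__ == "__main__":
    # The agent typically runs the script and parses whatever JSON it prints.
    print(json.dumps(collect_metrics()))
```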

Key features

  • Docker and Kubernetes monitoring
  • Failover monitoring
  • Root cause analysis
  • Works with Windows, Linux, FreeBSD, and OS X servers
  • In-depth reporting

Pros

  • Easy to set up and has a good UI
  • Multiple options for agent installation
  • More affordable than some competitor tools

Cons

  • Doesn’t include as many management features as similar tools
  • Doesn’t offer some Azure Classic services

SolarWinds Server & Application Monitor

SolarWinds Server & Application Monitor is a comprehensive monitoring tool that covers over 1200 applications and systems, including AWS, Microsoft, and Apache. It’s part of the Orion Platform from SolarWinds, making it a great choice for businesses that already use Orion. End-to-end visibility makes it easy for businesses to see how their devices are performing and how they interact with each other. There are two different licensing options: a permanent license and a subscription-based license.

Key features

  • Remote monitoring and management
  • Infrastructure and application mapping
  • Server capacity planning tool
  • Public, private, and hybrid cloud monitoring
  • Domain health checks

Pros

  • REST API allows monitoring of any infrastructure
  • Good out-of-the-box monitoring templates
  • Easy to use and configure

Cons

  • Businesses should consider additional security measures
  • Dashboard creation is not very user-friendly

SyxSense Manage

SyxSense Manage is an endpoint management and security solution that covers devices with all major operating systems, including Windows and Linux. The patch deployment features help organizations prioritize and apply patches to best limit downtime. The cloud-native platform provides helpful security reports, including risk assessments and task summaries to keep your devices secure and running correctly. SyxSense provides two packages, allowing businesses to choose the feature sets that work best for their needs.

Key features

  • Device discovery
  • Third-party patching
  • Hardware and software inventory
  • Custom data fields
  • Remote access and management

Pros

  • Makes it easy to patch remote servers
  • Deployment is fast and simple
  • Doesn’t require on-premises infrastructure

Cons

  • Support requests can sometimes take a while to get resolved
  • Some users reported lag issues within the interface

Also Read: How to Protect Endpoints While Your Employees Work Remotely

Atera

Atera is remote monitoring and management (RMM) software that provides complete network visibility and control. Organizations get real-time monitoring and alerts on system resources, active users, Windows updates, and more. Additionally, the platform has built-in automation for common IT tasks, like checking for updates or deleting temporary files. The patch management features can automatically apply new patches for Microsoft, Java, and Adobe applications, as well as drivers. There are three pricing tiers available, and the licenses are priced per user.
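
As an example of the kind of task such automation handles, the generic script below deletes temporary files older than a set age; it is purely illustrative and is not Atera's own automation engine or scripting format, and the seven-day retention policy is an assumption.

```python
import tempfile
import time
from pathlib import Path

MAX_AGE_DAYS = 7  # assumed retention policy: delete temp files older than a week


def clean_temp_files(temp_dir: str = tempfile.gettempdir(),
                     max_age_days: int = MAX_AGE_DAYS) -> int:
    """Delete files in the temp directory older than max_age_days; return the count removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for path in Path(temp_dir).iterdir():
        try:
            if path.is_file() and path.stat().st_mtime < cutoff:
                path.unlink()
                removed += 1
        except OSError:
            continue  # skip files that are locked or already gone
    return removed


if __name__ == "__main__":
    print(f"Removed {clean_temp_files()} stale temporary files")
```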

Key features

  • IT automation and scripting
  • Patch management
  • Ticket queues and scheduling
  • Android and iOS mobile applications
  • 24/7 support

Pros

  • Great support any time of day
  • Easy to use
  • Doesn’t limit the number of devices

Cons

  • Some customers ran into implementation issues with AnyDesk
  • There may be some compatibility issues with Windows 11

Traverse by Kaseya

Traverse by Kaseya is a network monitoring tool covering private clouds, hybrid clouds, virtualized infrastructure, and networks in different locations. It’s easy to integrate with popular ticketing, messaging, and business intelligence tools, making it easier to share files and respond to alerts. As it learns more about how an IT department responds to alerts, Traverse will adjust them accordingly to keep the network secure without overwhelming administrators. Licenses are priced per user, and there are a couple of different tiers to choose from.

Key features

  • Predictive analytics
  • Service containers
  • Service-level agreement manager
  • Network configurations
  • Event manager

Pros

  • Simplifies infrastructure monitoring and management
  • Reporting and dashboards provide helpful information
  • Helpful remote support and diagnostics tools

Cons

  • Some users had trouble with setup and configuration
  • Initial reporting can be confusing without customization

Choosing the Best Server Management Software for Your Business

Businesses looking for server management software should consider their current infrastructure and ensure the platform they choose is compatible with all of their devices. Then, they need to decide which management features they need or whether they only need monitoring capabilities. 

Smaller businesses should consider standalone monitoring and management tools, while enterprises may opt for full IT asset management suites. Additionally, decision-makers should read user reviews and take advantage of free trials before signing a contract.

Read Next: Why Business Continuity Management Matters Now More Than Ever

Hyperscalers: Will They Upend the Mainframe Market?
https://www.itbusinessedge.com/cloud/hyperscalers-will-they-upend-the-mainframe-market/ | Mon, 22 Nov 2021
These mega tech companies are taking the opportunity seriously and are making headway with their mainframe programs.

As seen with the latest quarterly earnings reports, the hyperscalers continue to post strong growth with their cloud platforms. Amazon’s AWS remains dominant, with sales increasing 39% to $16.1 billion.  As for the No. 2 player, Microsoft, the software giant’s Azure business grew 50%. Then, there is Google, which saw its cloud operations generate $4.99 billion, up 45% on a year-over-year basis.

There is much room for growth for the hyperscalers. According to IDC (International Data Corp.), spending on cloud technologies and services is forecast to grow from $706.6 billion in 2021 to $1.3 trillion by 2025.
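
For context, those IDC figures imply a compound annual growth rate of roughly 16-17%; the short calculation below is simply that arithmetic, not an additional forecast.

```python
spend_2021 = 706.6e9  # IDC: cloud spending in 2021 (USD)
spend_2025 = 1.3e12   # IDC: forecast cloud spending in 2025 (USD)
years = 2025 - 2021
cagr = (spend_2025 / spend_2021) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 16.5%
```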

Yet the market is getting more competitive, so the hyperscalers will need to seek out new corners of the cloud market. One is actually mainframes. With the urgency for digital transformation and the changes wrought from COVID-19—such as with the move towards more flexible work—large enterprises are more willing to rethink their legacy systems.

Let’s see what this means for the hyperscalers.

Why the Mainframe? 

The mainframe market is enormous. Based on research from BMC, this technology is used by:

  • All ten of the world’s largest insurers
  • 92 of the world’s top 100 banks
  • 18 of the top 25 retailers
  • 70% of Fortune 500 companies

Mainframes often handle mission-critical operations for large enterprises, such as payroll, customer accounts, insurance claims, airline reservations, and credit card processing, just to name a few.

IBM currently dominates the mainframe market. It not only sells the machines but also owns major software platforms. The main ones include the IMS and Db2 databases as well as CICS (Customer Information Control System), which manages sophisticated transaction processing.  

“Hyperscalers are moving into the mainframe market as they recognize that the majority of leading businesses in finance, government, insurance, and communications continue to run mission-critical applications on the mainframe,” said Nicole Ritchie, head of product marketing for Software AG Mainframe Integration Solutions. “As often quoted from SHARE and within mainframe circles, 70% of enterprise data resides or originates on the mainframe and up to $3 trillion in daily commerce flows through mainframes. 

“That’s a lot of business.”

And yes, the IT budgets are large. More importantly, senior managers are looking to make investments so as to be more agile and innovative to provide better customer experiences and fend off highly funded startups.

Also read: The Mainframe Will Drive Digital Transformation

The Mainframe Efforts of the Hyperscalers 

The hyperscalers are in a prime position to benefit from mainframe modernization. These companies have huge financial resources, powerful software, global cloud infrastructures, and thousands of talented engineers. There’s also the advantage of having many large customers that likely have mainframe installations.  

All the hyperscalers have partnership programs for mainframe migrations. For example, there is the AWS Mainframe Migration Competency Program. This includes consulting and software partners like Advanced, Blue Age, Deloitte, Micro Focus, TSRI, Accenture, and Wipro. 

Google, however, took a different route: the company acquired Cornerstone Technology, a provider of mainframe migration software, in 2020. Google transformed this into the G4 platform, which can translate complex COBOL, PL/I and assembler programs into Java and microservices. The output is then integrated with Kubernetes containers.

“There is an opportunity for hyperscalers for pull-through revenue,” said Rob Anderson, vice president of marketing and product for application modernization at Advanced. “Each offers myriad products that are cloud-native—locked away from being accessible by the mainframe. 

“By moving the mainframe to the cloud, they get the consumption revenue of those workloads and a massive, complex environment to sell solutions into.”

Will the Mainframe Go Cloud?

There are myriad successful use cases for the partnership strategy. Just look at the example of the migration of the logistics system for the U.S. Air Force (USAF) to Microsoft Azure. The platform served over 260,000 users a day, and the U.S. Government Accountability Office (GAO) indicated that it was one of the federal government's top 10 most critical legacy systems in need of modernization.

The requirements were definitely difficult. There could not be extensive retraining for the users. There was also a need for high levels of security, availability, and recovery.  

“The USAF project entailed re-platforming their Unisys mainframe system to the cloud, converting legacy code to COBOL.NET and migrating the data to Azure SQL,” said Scott Silk, CEO of Astadia (the company that participated in the project). “As a result, the Air Force was able to preserve their existing investments in the system while modernizing and preparing the way for further digital transformation.”

By being in the cloud, the USAF system benefited from having access to modern technologies, including analytics and AI. There were also lower operating costs.  

“Historically, we all think of mainframe modernization projects as being complex, high-stakes undertakings,” said Silk. “With automation, we’re seeing that change dramatically.”

Yet automation is not always a simple solution. Cultural issues can easily derail complex projects, especially when old approaches continue to be used. There is also the temptation to "boil the ocean." For the most part, there need to be realistic goals and the use of modern DevOps practices, which are not necessarily the case with mainframe projects.

Total migration is not necessarily the right path either.

“The reality is that many businesses are reluctant to give up the processing speeds, reliability and security of the IBM Z,” said Ritchie. “Hyperscalers would be well served to consider a hybrid approach where mainframe and cloud work together as a much more desirable state.”

Read next: Mainframes Still Matter in the Digital Business Transformation Age

PagerDuty Report: Stress on IT Teams on the Rise
https://www.itbusinessedge.com/it-management/pagerduty-report-stress-on-it-teams-on-the-rise/ | Fri, 30 Jul 2021
New survey highlights the rise in critical IT incidents and how that has increased pressures on IT teams in all sectors.

PagerDuty, a provider of a platform for managing IT incidents, published a report this week that finds the number of critical incidents IT teams have needed to address has increased nearly 20%.

The report finds IT teams on average experienced 105 critical incidents per month. Critical incidents are defined as those involving high-urgency requests for services that were not auto-resolved within five minutes but were acknowledged within four hours and resolved within 24 hours. Some sectors, such as online learning platforms, collaboration services, travel, non-essential retail, and entertainment services, experienced an elevenfold increase in the number of critical IT incidents that needed to be addressed. At an average of 105 critical incidents a month per organization in 2020, the annual cost per organization for these incidents is $158,760.

Based on data collected from 16,000 organizations and generated by more than 700,000 users, the report suggests the level of stress IT teams have experienced during the COVID-19 pandemic has been considerable. On average, the report finds each IT incident requires 1.2 members of an IT team and takes about 126 minutes to resolve. The average incident costs $126 in engineering time.
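
Taken together, those figures appear to account for the report's annual cost estimate; the quick check below simply multiplies them out (the derivation is an assumption, but the arithmetic matches the $158,760 figure exactly).

```python
incidents_per_month = 105  # average critical incidents per organization per month
cost_per_incident = 126    # average engineering cost per incident, in USD
annual_cost = incidents_per_month * cost_per_incident * 12
print(f"Estimated annual cost per organization: ${annual_cost:,}")  # -> $158,760
```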

About a third of incidents appear to have occurred outside of normal working hours, resulting in members of IT teams working the equivalent of two extra hours per day, totaling an extra 12 weeks of work per year. Specifically, there was a 9% increase in interruptions between 6:00 p.m. and 10:00 p.m., and a 7% increase in holiday and weekend interruptions. An interruption is defined as a non-email notification, such as a push notification to a mobile phone, a text message, or a phone call, generated by an incident. The number of interruptions during normal business hours increased 5%, while there was a 3% decrease in the number of interruptions during the hours when end users and the IT staff that supports them are normally sleeping.

Also read: Survey: Advances in IT are Starting to Have Major Impact

About 10% of users of the platform experienced 19 non-working-hour interruptions a month, which is ten times that of the median responder. It's not clear to what degree that 10% represents members of the IT staff who have unique skills or simply individuals rising above and beyond the call of duty, notes Sean Scott, chief product officer for PagerDuty. "There's a lot of burnout potential for these individuals," he says.

PagerDuty defines an "overworked" responder as a member of the IT staff who has seven non-working-hour interruptions a month, which is three times the monthly median for responders.

Overall, the PagerDuty platform ingests roughly 30 million events per day, which generates about one million alerts resulting in more than 500,000 interruptions that go beyond an email notification. There are also roughly 55,000 critical incidents a day. 

The report finds that the absolute volume of interruptions on a year-over-year basis only increased 4% in 2020. The overall percentage of IT staff being interrupted is flat or trending downward. That data suggests that, overall, companies are doing a good job of spreading the load equitably across their employees.

Of course, no two organizations are exactly alike in terms of how they manage IT. There are many more organizations that don't have an IT incident management platform than do. IT professionals might want to take that into account when they determine what type of organization they want to work for next.

Read next: Work-From-Anywhere Requires More Resilient IT

VMware Adds Subscription Option for VMware Cloud
https://www.itbusinessedge.com/cloud/vmware-adds-subscription-option-for-vmware-cloud/ | Wed, 31 Mar 2021

VMware has added a subscription option, dubbed VMware Cloud Universal, that makes consuming its offerings simpler for enterprise IT organizations. The new subscription option also offers a console that allows IT teams to monitor and manage instances of VMware Cloud running on-premises or in the cloud.

VMware has also unveiled a new offering, the VMware App Navigator, which is a set of professional services that VMware engineers can use to assess workloads that might be ready to move to the cloud based on the value of the application. 

VMware claims 300,000 organizations have built and deployed more than 85 million workloads on VMware. However, not all of those workloads make use of the various offerings that are included in the VMware Cloud suite. Large numbers of VMware virtual machines, and the workloads that run on them in on-premises IT environments, are managed using tools and frameworks provided by other vendors.

Also read: NVIDIA, VMware Create the AI-Ready Enterprise Platform at Cloud Scale

VMware’s Awakening

VMware in the last few years has been narrowing that gap. The challenge VMware faces is that many of those workloads are now moving into public clouds. VMware is making a case for more seamlessly moving applications to public clouds that run VMware software. However, all the major cloud providers also provide services based on open source software that is less costly than VMware Cloud. Many enterprise IT organizations are opting to refactor applications to run natively on virtual machines provided by Amazon Web Services (AWS), Microsoft and others.

The launch of VMware App Navigator is an effort to exercise more influence over those decisions. VMware is also clearly hoping IT organizations will be willing to pay for that advice. VMware App Navigator requires VMware personnel to assess application workloads and then make a recommendation, notes Dormain Drewitz, head of product marketing and content strategy for VMware Tanzu, the Kubernetes-based platform that is part of the VMware Cloud portfolio. “It’s a service engagement,” she said.

Also read: Falling Cloud Storage Costs Mask Growing Management Headache

Too Much, Too Late?

Third-party IT services providers have been making those types of assessments on behalf of customers for years now. VMware may not have always fully appreciated those recommendations, but there are not many organizations at this point that have not at least evaluated moving existing applications to the cloud. There may be a greater sense of urgency about making that shift now in the wake of the economic downturn brought on by the COVID-19 pandemic. However, the bulk of workloads continue to run in on-premises IT environments for many reasons other than cost.

It’s still not certain how relevant VMware will be in the age of the cloud. VMware will always be a force to be reckoned with, but the days when it dominated on-premises IT are all but over, especially as new classes of applications based on microservices that can run on any instance of Kubernetes emerge. VMware is making a strong case for Tanzu as part of a portfolio that can run both existing monolithic and microservices-based applications, but many IT organizations have already decided to head off in multiple different directions.

A subscription to VMware Cloud might lock more customers into the VMware portfolio in a way that is ultimately more affordable than it is today. It’s just that it might be a case of too much now being offered too late.

Read next: Oracle Adds Free Cloud Migration Services

AMD Flexes Server Processor Muscle
https://www.itbusinessedge.com/servers/amd-flexes-server-processor-muscle/ | Tue, 16 Mar 2021

AMD increased the pressure it's been applying to archrival Intel with the unveiling of the AMD EPYC 7003 Series, which includes the AMD EPYC 7763 processor. The company claims it is the fastest server processor available.

Processors in the AMD EPYC 7003 Series have up to 64 "Zen 3" cores, with each core having access to up to 32MB of L3 cache. The processors also support the Peripheral Component Interconnect (PCI) Express 4.0 expansion bus standard, which promises to double the overall throughput available over the existing PCI Express 3.0 bus.
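
To put that doubling in concrete terms, PCIe 3.0 signals at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding; the snippet below works out the approximate usable per-lane and x16-link bandwidth from those published specs.

```python
ENCODING_EFFICIENCY = 128 / 130  # PCIe 3.0 and 4.0 both use 128b/130b encoding


def lane_bandwidth_gb_s(transfer_rate_gt_s: float) -> float:
    """Approximate usable bandwidth per lane in GB/s (8 bits per byte)."""
    return transfer_rate_gt_s * ENCODING_EFFICIENCY / 8


for name, rate in [("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)]:
    per_lane = lane_bandwidth_gb_s(rate)
    print(f"{name}: {per_lane:.2f} GB/s per lane, {per_lane * 16:.1f} GB/s on an x16 link")
# PCIe 3.0: ~0.98 GB/s per lane (~15.8 GB/s x16); PCIe 4.0: ~1.97 GB/s (~31.5 GB/s x16)
```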

AMD claims its third-generation AMD EPYC 7003 Series processors offer the highest core density and twice the integer performance compared to competitors, while also improving transactional database processing by up to 19% and Big Data analytic sorts by up to 60%, providing 61% better price/performance than its primary x86 rival.

The AMD EPYC 7003 Series processors also expand AMD's Infinity Guard security capabilities to include Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), which thwarts malicious hypervisor-based attacks by creating an isolated execution environment in memory.

Amazon Web Services (AWS), Microsoft, Google and Oracle all pledged to provide cloud services based on the latest AMD EPYC processor while Dell Technologies, Hewlett-Packard Enterprise (HPE), Cisco, and Lenovo also launched on-premises platforms. Lenovo, for example, added 10 Lenovo ThinkSystem Servers and ThinkAgile Hyperconverged Infrastructure (HCI) platforms built on the latest EPYC processors that it claims achieve 25 new world records across a broad set of industry-standard benchmarks. 

That capability not only enables Lenovo to improve overall price/performance; the security capabilities also enable the company to help lower the total cost of ownership, noted Kamran Amini. "It's easier from a lifecycle management perspective," says Amini.

Also read: Tetrate and the Emergence of the Intercloud Hypervisor

AMD’s Enterprise Profile on the Rise

AMD overall is currently in a much better financial position than at any point in recent memory. For fiscal 2020, it recently reported revenue of $9.76 billion with an operating income of $1.37 billion. It's hard to say how much of that revenue is being driven by sales of enterprise servers and client systems, but the days when AMD was used primarily as a stalking horse to encourage Intel to cut prices may be over.

Less apparent, of course, is to what degree AMD processors will be consumed in the cloud versus on-premises. The bulk of data may stay on-premises, but it's clear the rate at which new workloads are being deployed in the cloud is more rapid. It's also worth noting that rivals such as Nvidia have made significant gains in terms of processing massive amounts of data. For example, Informatica recently announced it is making Nvidia processors available in the cloud via a serverless computing framework running on a public cloud.

Regardless of the path forward, it’s clear AMD is now a force to be reckoned with in the enterprise. The challenge now will be figuring out what the right mix of AMD-based servers running on-premises and in the cloud should be alongside existing x86 servers from Intel, GPUs from Nvidia, and eventually Arm-based platforms running at the edge and in the cloud.

Read next: NVIDIA, VMware Create the AI-Ready Enterprise Platform at Cloud Scale

Tetrate and the Emergence of the Intercloud Hypervisor
https://www.itbusinessedge.com/servers/tetrate-and-the-emergence-of-the-intercloud-hypervisor/ | Thu, 21 Jan 2021

Occasionally, I run into an interesting company at the beginning of a critical trend. When I got a call from Tetrate PR that described this young firm, which was initially funded by companies like Dell and Intel, it reminded me of VMware and hypervisors. Hypervisors were initially created to deal with software problems that arose when Intel iterated its platform. The initial hypervisors allowed existing software to run on new Intel hardware without being rewritten. This relationship between hypervisors and hardware made Pat Gelsinger’s move from Intel to VMware and then back to Intel ironic.

A similar problem has emerged in the multicloud world, but it has more to do with needing software to run on multiple cloud platforms to assure redundancy and uptime. As we have seen, almost any cloud provider can have a catastrophic outage. Maintaining on-premises resources to handle such an outage is wicked expensive; having another cloud provider on hot standby is far cheaper. But each cloud provider is different, which means that applications that run well on one cloud provider won't run well on another unless altered or rewritten. What Tetrate has, and likely why both Intel and Dell invested in them, is like a hypervisor for the cloud, allowing applications to move freely, stay secured, and be effectively managed across cloud providers. Call it an entirely different kind of hybrid HPC computing solution where, rather than shifting between on-premises and the cloud, you shift between cloud providers as needed.

The Idea Came From the Cloud

It is fascinating to note that this solution, which Tetrate calls a “Service Mesh”, grew out of what cloud providers have to do themselves. Often, they use various hardware (some they may specify themselves) and providers to both optimize their environment for their customers and benefit from competitive bidding for the related servers. They employ a control, management, security, and compatibility layer that allows them to move customer workloads to different sites, different hardware, and even different parts of the world as needed.

This middleware, or cloud hypervisor for lack of a better term, is typically proprietary to the cloud provider because it equates to either a competitive advantage or table stakes for the service. There is no real interest in licensing it out, because customers would use it either to migrate to a competitor as needed or to bring in lower-cost providers, reducing the potential revenue for the cloud provider that shared the technology.

It isn't that providers don't recognize a need. Whenever there is a significant cloud outage, that need is self-evident, but the financial risk of sharing the technology exceeds the individual benefit. Granted, this lock-in mentality is a legacy from before open source rose in popularity, but it still exists in executives' minds today.

Tetrate’s Service Mesh

Tetrate took this portability concept and turned it into a subscription offering that provides traffic control, assures reliability and uptime by using redundant cloud providers, reduces latency and error rates in cloud operations, and secures the result via pervasive encryption.

One of the significant benefits is that developers in a Tetrate shop don't have to worry about these elements, as they are handled and managed by the Tetrate layer. This mirrors how developers working with a hypervisor don't have to worry about hardware inconsistencies, because the hypervisor handles them. This outcome also helps with operations and compliance, because you don't have to modify compliant code to run on a different platform and put that compliance at risk.

Some of the biggest names in banking and microservices, like FICO, Platform One, and Square, have become huge fans of this technology, which assures IT managers that incompatibilities or catastrophic cloud outages won't be traced back to a problem with the platform.

One other exciting aspect of Tetrate is that it was designed as a post-pandemic company in that virtually everyone is remote. They have an office in Milpitas, California, for meetings, but employees are worldwide, including in the U.S., India, China, Indonesia, New Zealand, and Spain (at the time of this briefing, they were interviewing a new employee who lives in Israel).  This position gives them far better access to the best talent because they don’t require employees to relocate.

The Emergence of an Inter-Cloud Hypervisor

Tetrate is at the front end of what should be a market opportunity similar to that of hypervisors, but instead of providing portability across hardware platforms, Tetrate provides portability between cloud providers. Customers who use the product in production praise this portability, along with the reduction in management overhead and downtime. Security is also a central selling point, as the solution directly mitigates API exposure, which is forecast by Gartner to account for 90% of the attack surface in a few years.

In the end, I can see why Dell and Intel invested in this company. As VMware did in its day, Tetrate addresses a critical new problem in what is rapidly becoming a cloud computing world.

NVIDIA and ARM: A Potential Game Changer
https://www.itbusinessedge.com/servers/nvidia-and-arm-a-potential-game-changer/ | Thu, 17 Sep 2020

The merger of NVIDIA and ARM announced this week will change both the CPU and GPU landscape dramatically and have a material impact on AI capabilities by the end of 2021. It will change the competitive landscape for PCs and servers while creating some new opportunities and risks in the smartphone space. It can also impact everything from robotics and autonomous vehicles to how OEMs buy core technology.

Let’s explore these points this week.

NVIDIA + ARM: Regulatory Hurdles

The U.S. should approve this merger with few reservations, given that it’s a U.S. company acquiring a firm that was based overseas. However, the EU and China may have issues. The EU may consider ARM a European asset and want conditions that assure that jobs and headquarters remain in Europe for the acquired firm. NVIDIA has already indicated this is in the plan, but the EU may require that NVIDIA formally commit to this before approval. I don’t see a problem with that request given this is the stated plan, so the EU’s approval is all but inevitable.

China, which is in a trade war with the U.S., may feel that this merger puts it at a disadvantage and may impose its own conditions on control and ownership. Getting China's approval, therefore, will be more difficult, because China will want assurances that the U.S. government doesn't somehow gain inordinate influence over the technology. NVIDIA has no control over the government and must comply with U.S. rules and regulations. This concern will be harder to overcome because of the lack of trust between China and the U.S., which will make coming up with enforceable conditions problematic. And this is an election year in the U.S., meaning that getting anything done will be problematic, particularly if it means leaving the ARM jobs in Europe. I don't expect this problem to be insurmountable, but NVIDIA's merger team is likely to find most of their effort tied to getting China's eventual approval.

Changed Competitive Dynamics

Up until this merger, there were three comprehensive core technology component vendors for personal computers and smartphones. They were Qualcomm (much heavier in smartphones than PCs), AMD (PCs and game systems), and Intel (PCs only). With this merger, NVIDIA joins this group, and all of these competitors have efforts that span PCs and servers, particularly for AI. Most of them have interests in robotics, autonomous cars, and IoT platforms as well.

The interesting dynamic will be between Qualcomm and NVIDIA given that Qualcomm is a licensing entity — and NVIDIA, with ARM, becomes one as well. NVIDIA has indicated they will pivot to more of the licensing model over time for a broader cross-section of offerings, and Qualcomm is a significant ARM licensee. Except for their AI work, NVIDIA and Qualcomm don’t compete with each other that much and will effectively be partners, under the ARM licenses, once this merger is complete.

Now the two firms will have an exciting choice: strengthen the partnership and make it strategic, leave things as they are, or Qualcomm abandons ARM and creates its own CPU technology to license. The first option may have some antitrust exposure but could result in some rather interesting joint opportunities. The second option, which is also the most likely, would require NVIDIA not to abuse its power and Qualcomm to be judicious concerning how it shares its technology with ARM (this would be similar to how car companies often get engines from competitors).

The final option would be wicked expensive for both firms and would likely weaken Qualcomm significantly before, and if, its new CPU technology got to critical mass in the market. Since Qualcomm is a licensing entity and knows how to manage these relationships, I expect it will be comfortable continuing to license from the merged NVIDIA/ARM entity. Given the FTC's recent challenge to Qualcomm's licensing model, having another large company emerge that uses a similar licensing model should reduce Qualcomm's exposure in this area long term. Qualcomm's recent appellate court win also did much the same thing, so the benefit to the company isn't as great as it would have been pre-appellate judgment.

Intel and AMD

Now, this merger potentially places ARM and X86 against each other broadly while transitioning NVIDIA into a licensing powerhouse. AMD's custom business and lack of a fab arguably make it a licensing-like entity and leave Intel as the odd man out in terms of business model. If Intel allows broad licensing of X86, it will reduce the justification for Intel to own its fabs and force one of the most significant changes to its business model in history. If Intel holds out, it will increasingly be standing alone; at its size it can certainly do that, but with its general support for open technologies, resisting the move to more of a licensing model will be difficult.

If Intel does license more aggressively, firms will be given a choice between licensing ARM and licensing X86. If the PC and server OEMs then choose to license, Intel will either have to open its fabs to this business, or it likely won't sustain volume at a high enough level to support those fabs. As a result, this merger may impact Intel the most. But it could merely accelerate Intel down a path it was already following.

AMD has been an ARM licensee in the past and is currently the largest X86 licensee. As noted, its custom business already puts it on an exciting path, but it doesn't own X86 and can't license it to others, since its license from Intel restricts that. However, it could license its additive X86 technology to the OEMs to enhance the offerings the OEMs produce as they compete with their peers. This change would open up new licensing opportunities for AMD but, like Intel, reduce the market opportunity for AMD's parts.

Wrapping Up: The Move To Licensing

Overall, I think this merger will force a broad move away from selling finished core parts and toward licensing the technology for others to produce their own. We already have cloud vendors working to produce their own core components, often licensed from ARM. If this merger pushes Intel and AMD toward more aggressive licensing, it could both accelerate this trend and offset those efforts that currently don't use X86 technology, making X86 more competitive as a licensed technology.

One thing is certain, though: this merger is going to change the market for core technology products across several segments, including cloud, PC, smartphone, AI, IoT, robotics, autonomous vehicles, and security. I doubt there will be a single tech company that isn't significantly impacted by the resulting changes.

Rob Enderle has been a TechnologyAdvice columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an AS, BS, and MBA in merchandising, human resources, marketing, and computer science. Enderle is currently president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly worked at IBM and served as a senior research fellow at Giga Information Group and Forrester.

 

IBM Completes Red Hat Acquisition | What Will Change?
https://www.itbusinessedge.com/servers/ibm-completes-red-hat-acquisition/ | Thu, 11 Jul 2019

I was part of a failed acquisition by IBM that cost the company billions, so it is particularly fascinating to watch the company's execution of the Red Hat acquisition. If IBM had used the same process, with similar promises, that it used to acquire ROLM Systems, I could forecast a dire outcome for this effort. However, many people were so frustrated by IBM's acquisition process that they rewrote it and created an industry-leading model that Dell eventually adopted and then improved. IBM appears to be using this improved model with the Red Hat acquisition, basically closing the circle. Execution should result in an organizational structure like VMware's at Dell Technologies, and that will have a number of interesting implications for IBM and Red Hat customers, employees, and IBM investors.

Let’s go through some of the expected changes.

Friction Reduction

Now, if you think of this less like a typical merger, where the two firms are smashed against each other with massive damage to the acquired company, and more like a super partnership, you'll be closer to where this will likely end up. Materials, intellectual property, products, and employees will find it easier to move between the two firms than with any pre-existing Red Hat or IBM partner.

This is because the places where you are likely to see synergies and mergers between the two firms are in common services. It is also normal in this process for the two firms to treat each other like favored vendors, with reduced approval processes, advantageous intercompany charges (prices), and, because the HR systems will likely be aligned, a far simpler process for moving between the firms.

Another change at IBM since the disastrous ROLM acquisition is that IBM is no longer an employer for life, a policy that had turned IBM acquisitions into dumping grounds for underperforming employees. Since IBM now, like other firms, operates under employment at will (terminating an underperforming employee is relatively easy), the foundation for this bad behavior has been removed. This doesn't mean an IBM manager can't still trick a Red Hat manager into taking a problem employee, but it reduces the incentive to the point that going through the effort just doesn't make sense, and the repercussions for an IBM manager caught doing it would likely be dire.

Red Hat’s Reach Expands Exponentially

Red Hat is a relatively small firm; IBM is not. With this acquisition, Red Hat gains better access to IBM's global service and sales resources, allowing the firm to compete more effectively worldwide. IBM's market research also significantly exceeds what Red Hat can afford, and this data and these resources will be available to Red Hat so it can better target future development on enterprise needs and create more effective messaging for its existing offerings. So, Red Hat gains reach in sales and services as well as in market information, pretty much assuring a strong upside to its global sales.

Artificial Intelligence

IBM is arguably the leader in AI at enterprise scale with Watson, and, increasingly, the company is using this tool internally to enhance decision making. This isn't an inexpensive effort, but it is a potentially huge competitive advantage for the IBM executives who have access to it. I expect Red Hat will get access to this tool shortly and be able to use it to enhance its decision-making process, improving its market performance. IBM, in turn, will gain information on the application of Watson at Red Hat and, given its new intimacy with the company, should be able to use this access to better refine Watson going forward.

Doubling Down On Open: The True Oracle Counterpunch

Up through the 1980s, IBM was the poster child for the lock-in strategy so popular back then. But, after this policy nearly put them out of business, they pivoted to Open with a vengeance, and with this acquisition they are effectively doubling down on this bet. IT shops are rapidly updating and modernizing their infrastructure to better compete in this ever more digital world and have massively favored the "Open" strategy as well, which has become the new industry standard (someone should really tell Oracle). This need has migrated to the cloud in spades, and cloud efforts that don't embrace the Open concept are at a disadvantage.

With this acquisition IBM is signaling that Open will remain the firm’s strategic imperative.

Wrapping Up: Doing Mergers Right

Like a lot of you, I've been through a lot of bad mergers, and the pain to employees, customers, and investors is avoidable. The process IBM is using should assure that this merger is one of the few good ones, and I hope other companies that more typically use the destructive smash-together processes of the past will learn from this. IBM's acquisition of Red Hat should make both firms stronger and provide better options for employees and customers, and even IBM investors should eventually be pleased with the result. There is a right way and a wrong way to do most things; it took a lot of pain to get here, but IBM has embraced the right way. Now if we can just get most firms to do the same.

IBM Power9 in the Cloud Offers Advantages
https://www.itbusinessedge.com/servers/ibm-power9-cloud-advantages/ | Fri, 21 Jun 2019

One of the big problems in competing with a dominant vendor is just getting IT to try out your technology. Few are willing to risk trying something new if what they are getting from the dominant vendor works. But the cloud is creating opportunities for challengers that we haven't seen before. Case in point is this announcement from IBM, in which it talks about putting Power9 into the cloud predominantly to run legacy applications, but which could also showcase that the advantages IBM has been talking about for years are real.

While the move to put applications into the cloud is arguably one of the biggest trends this decade, much of the activity is with relatively new platforms and applications that were designed with this in mind. There are millions of legacy applications running on AIX and IBM i platforms that companies aren't willing to recreate but that could enjoy the same benefits; these were never designed for that environment, and that need has largely gone unmet until now. This announcement provides customers an opportunity to run their legacy AIX or IBM i instances in the IBM Cloud, gaining the same cost advantages as newer applications.

Let’s start with the legacy applications part of this and then we’ll move to competitive displacement.

The Problem of Legacy Applications

One of the big problems plaguing the industry is that there are a lot of mission-critical applications that were written decades ago and continue to provide value, but not enough value to justify recreating them. These old applications also tend to be reliable and have a huge number of dependencies on other applications, and the people who understood them are long gone, making recreating or even updating them problematic.

As firms move to either cloud or hybrid-cloud models, dealing with these applications has been problematic because they run on legacy platforms and were created long before we coined the word "cloud." Rewriting them has proven very expensive, difficult, and exceedingly risky because the people who created the apps are long gone and a failure, due to the high number of dependencies, could be catastrophic.

But, with this, at least in theory, the legacy applications don't have to be rewritten or replaced; they can be hosted in the IBM Cloud, and the IT shop can continue to run these aging applications indefinitely.

Targeted applications would largely be database and financial apps, which have massively long lifetimes, often exceeding the lives of the firms that created them. While they will eventually need to be updated or replaced, this takes a lot of the pressure off doing it near term and allows the IT shop to prioritize other, more critical projects instead.

Competitive Advantage

Power9 has a number of advantages over the x86 platform, which is now dominant. But getting people both to buy non-standard hardware and to port their applications to it is problematic. With virtualization you can run x86 instances on a Power9 platform, but the offsetting performance loss likely wouldn't make it worth it. For those that need the extra performance (there is a multiplier here I'll get to shortly) and security, they now have that option. Now, about that multiplier: Intel has been having serious security problems, both in terms of actual potential exploits and in terms of timely alerts. On this last point, it has been taking up to 12 months for Intel to notify customers about a known exposure in their hardware. When they do notify, they issue patches, but these patches tend to disable features and/or reduce performance. Neither Power9 nor AMD has reported the same level of exposure, and both platforms have mostly been immune to the latest two groups of exploits.

This is one of the reasons there are AMD instances popping up in cloud offerings as well. But, with this announcement, a company can, with very little risk, try out both platforms without incurring the cost of additional hardware and avoid the security and performance exposures plaguing those running on Intel. It strikes me that any cloud provider running AMD and IBM Power9 exclusively might have a very strong security advantage in this increasingly unsafe world.

Wrapping Up

IBM's Power9 announcement initially seemed to be about hosting legacy applications and, while that is important, I don't think it is as powerful as the ability to try out Power9 instances in the cloud and see whether the performance is adequate while enjoying the increased security. I think this announcement is a godsend, and not only for those struggling with legacy AIX and IBM i applications.

I wonder how long it will be before someone realizes this same approach could be used for a brand-new architecture, one that hasn't yet been put on the market. I think it is just a matter of time until a brand-new microprocessor architecture comes to market. It's about time we had a little disruption. Until then, recognize that you have options in the cloud: if legacy apps, performance, and security are important to you, this IBM announcement may be either a new option or an indication of even bigger changes to come.

Why HCI Is Critical for Digital Transformation
https://www.itbusinessedge.com/servers/why-hci-is-critical-for-digital-transformation/ | Fri, 11 Jan 2019

Hyperconverged infrastructure (HCI) is more than just a convenient way to streamline today’s complex, silo-laden data center – it also provides the foundation for the transformation to a digital services business model that is crucial to success in a rapidly evolving economy.

Digital services, the kind that Uber and Airbnb are using to disrupt longstanding industries like transportation and hospitality, require a highly flexible and scalable data infrastructure. While virtualization has done wonders for traditional hardware platforms in this regard, the fact is that even fully virtualized physical infrastructure is expensive, difficult to build, and requires highly specialized training to manage and optimize.

HCI not only offers the promise of a vastly simplified physical plane, both in the initial deployment and as an ongoing operational construct, but it also sits on a vastly streamlined footprint, with some solutions capable of packing an entire data center's resources into a few square meters.

Small wonder, then, that many enterprise executives are balking at the prospect of shoe-horning digital transformation initiatives into legacy infrastructure and are launching new services on greenfield HCI platforms instead. Paul Nashawaty, product marketing strategist at backup and recovery specialist HYCU Inc., notes that HCI simplifies all of the key phases of digital transformation, from the initial data migration to integrating file and block services and linking to cloud-based B&R services. In this way, all digital services gain access to all available data under a unified system that is scalable, resilient, and easy to maintain.

Small and medium-sized businesses (SMBs) are likely to benefit greatly from HCI. For one thing, says BizTech Magazine's Juliet Van Wagenen, it provides massive scale without the cost of a dedicated IT team, which levels the playing field with larger, well-heeled competitors. As well, it supports management automation, DevOps and a host of other capabilities that drive digital transformation. For these and other reasons, Techaisle Research is calling for SMB investment in HCI to double by 2020 as these businesses pursue the same top-line platforms from Nutanix, Cisco and HPE that are currently making their way into the data ecosystems of top-tier enterprises.

In many ways, however, HCI’s benefit to digital transformation is not so much its scalability or its management simplicity, but its speed. As Tech Central’s Jason Walsh learned after talking to multiple HCI experts in the field, opportunities rise and fall in the blink of an eye in a digital economy so the underlying data infrastructure must have the ability to be deployed, upgraded and reconfigured quickly. With HCI eschewing much of the configuration and integration processes of traditional infrastructure in favor of a modularized plug-and-play model, enterprises gain an unprecedented ability to react, and even proact, in highly dynamic business environments.

While there are bound to be start-up organizations that adopt HCI from the very beginning, most established enterprises will likely deal with hybrid solutions mixing traditional, converged and hyperconverged infrastructure both within the data center and on the cloud. Eventually, however, it is wholly reasonable to expect HCI to become the de facto standard for IT.

Most enterprises have put up with expensive, complex and ultimately low-performing resources for decades because there was simply no other way to do it. Now that a faster, cheaper and more elegant solution has arrived, there is very little reason to keep building and maintaining infrastructure the hard way.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
