With every turn of the yearly calendar, there’s usually more than bells ringing in my ears. It’s usually a little voice telling me to “make your New Year’s resolution.” Maybe you hear the same voice.
And somebody’s always reciting the usual litany:
In the spirit of renewal, of starting over and of getting a make-over, IT Operations needs a New Year’s Resolution. (Actually, more than one).
CA Nimsoft Monitor 7.1 is the next minor release of the product. It includes usability enhancements, extended IP protocol support, additional language support and a number of product fixes.
Click here to watch a short video demonstration of each of these new features in 7.1.
In 2014, many organizations will implement DCIM for the first time or expand or replace their existing implementation. Several critical factors will help organizations succeed. Here are recommended best practices based on experiences with our customers.
First and foremost, when it comes to enabling users, choose a DCIM solution with usability in mind. The software should be role-based and thus meet your users where they are. Seemingly small things, such as allowing single sign-on access, increase usability. One of the key benefits of DCIM is that its architecture brings together a large, valuable data set. To enable users, you need to know how they can leverage that data themselves. For instance, are they able to create their own metrics, or do new metrics require additional service costs?
Integration is at the core of DCIM. The most common first phase is integration with devices, power and cooling equipment and the BMS (building management system), i.e., the physical data center. Ultimately, DCIM can even replace some of the tools previously used to monitor and manage the physical data center.
It is often said that one of the attributes of genius in science and engineering is the ability to make connections between things, resulting in a new understanding of what was previously unforeseen or unexplained. Runel Soria, a writer on genius, gives some good examples: “Da Vinci forced a relationship between the sound of a bell and a stone hitting water. This enabled him to make the connection that sound travels in waves. Samuel Morse invented relay stations for telegraphic signals when observing relay stations for horses.”
The Los Angeles Times reported that the brain of one of the great geniuses of the last century, Albert Einstein, had significantly more neuronal connections than the average person’s. This is an astonishing metaphor for, if not the physical realization of, Einstein’s ability to make connections among, and to visualize and analyze, matter, energy and space in ways that no one else did. Voila: genius.
Our understanding of the world (the physical and biological environment, politics, economics, human health, the weather, etc.) is greatly enhanced by visualizing things in terms of connections and by creating models. This is often called “Systems Thinking,” which, as I outlined in a previous blog post on Systems Thinking, extends to IT operational management.
The last week before Christmas could be do-or-die for both shoppers and merchants as people scramble to make those last-minute purchases. Merchants stand to gain if they are prepared to meet the customer’s needs, one of which is a positive online shopping experience. Years ago, a commonly accepted response time for a website was around eight seconds; today it’s probably more like four. With so many alternatives for buying the same or similar merchandise online, a poorly performing website could cost retailers millions in lost sales. According to Adobe Systems’ recently released Digital Index, Cyber Monday generated $2.29 billion in sales, approximately 42% of which were online-only transactions, with a peak of $150 million in a single hour.
With so much at stake, it’s no wonder that retailers are doing everything they can to streamline their websites for a better shopping experience, in everything from look and feel to speed. Speed alone, however, isn’t enough. The bigger picture has to focus on availability, performance and consistency. We’ve been using the CA Technologies APM Cloud Monitor solution to monitor more than 50 websites to better understand how well various retailers are doing in those areas. Generally speaking, the news is good, but there are some speed bumps along the way for some.
Availability has been good overall, with some sites achieving or approaching 100%. Those that haven’t achieved 100% availability run the risk of losing sales. For example, we’ve observed several sites at 97% availability. That equates to a little over 20 hours of downtime, which could be costly if the period of unavailability occurred during peak times. Availability alone, however, isn’t a sufficient indicator. A poorly performing website that is 100% available but has long transaction times is likely to see more abandoned shopping carts than a site with consistently fast transaction times and a better overall customer experience.
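For a sense of the arithmetic behind that figure, the short sketch below converts an availability percentage into hours of downtime. The roughly 30-day monitoring window is an assumption for illustration; the post does not state the exact period measured.

```python
# A minimal sketch of the downtime arithmetic: converting an availability
# percentage into hours of downtime. The ~30-day window is an assumption.

def downtime_hours(availability_pct: float, window_hours: float = 30 * 24) -> float:
    """Hours of downtime implied by an availability percentage over the window."""
    return (1.0 - availability_pct / 100.0) * window_hours

if __name__ == "__main__":
    for pct in (100.0, 99.9, 97.0):
        print(f"{pct:5.1f}% available -> {downtime_hours(pct):5.1f} hours down")
```

At 97% availability over a month-long window, this works out to roughly 21.6 hours, consistent with the “little over 20 hours” cited above.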
The business runs on IT. IT is getting more complex. Resources are thin. How do you keep pace?
J.P. Garbani, vice president and principal analyst at Forrester Research, Inc., joined us for a webcast on that very topic titled Fact or Fiction: Traditional APM is Enough To Keep Pace with Growing Complexity. In this webcast, which you can watch below, you’ll learn about:
If you’re drowning in IT management data or find yourself stuck in a Circle of Blame every time there’s an application performance issue, then this webcast is for you.
Everybody’s got an opinion when it comes to which new IT technologies to bank on and the best way to manage them.
With every decade comes a new debate. One grand dispute played out in the 1990s, when Cisco advocated centralized intelligence at the network core to orchestrate network activity and assure quality of service. In the other corner was 3Com, Cisco’s archrival, which advocated intelligence at the network edge. Cisco was king of “big iron” (core network routers and switches). 3Com was king of Ethernet “NICs” (network interface cards) for PCs and servers and of stackable LAN switching for wiring closets.
Another related raging debate was about “big bandwidth.” Because bandwidth was getting so cheap (in part due to Moore’s Law), some pundits argued that “you could just throw bandwidth at the problem and throw out all the emerging techniques for intelligent control.”
This certainly was entertaining. And if you wanted to pick a fight in the press, “bandwidth versus brains” was the right topic.
Below is the latest product update and release information for CA Nimsoft Monitor. This notification includes information on all components that were released during the month of November 2013.
CA Nimsoft Monitor Product Release Update: November 2013
CA Unified Communications Monitor (CA UC Monitor) is CA Technologies’ answer to monitoring unified communications – what most people think of as VoIP and video. Unique in its ability to monitor multi-vendor environments, including Cisco Unified Communications Manager, Cisco medianet, Microsoft® Lync® and Avaya environments, CA UC Monitor has been certified as a Microsoft Partner for Lync in the Apps for IT category.
Lync, previously called Microsoft OCS (Office Communications Server), is a comprehensive unified communications solution offering voice, video, conferencing, IM and presence, the latter being that nifty feature that lets you know whether the person you are trying to reach is offline, in a meeting, available, etc.
As infrastructures expand to incorporate unified communications, the importance of monitoring tools for unified communications grows right along with them. Serving voice and video on the same IP network that runs your other critical data applications creates an environment in which applications battle for finite resources and service levels can decline rapidly. Unified communications introduces real-time applications that require consistent bandwidth; these are inherently different traffic types, with different quality-of-service requirements from data applications. That makes monitoring key metrics for these applications all the more important for maintaining service quality.
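As a rough illustration of what such monitoring boils down to, the sketch below checks per-call voice metrics against service-level targets. The 150 ms latency, 30 ms jitter and 1% packet-loss thresholds are commonly cited VoIP guidelines used here as assumptions; they are not values taken from CA UC Monitor.

```python
# Illustrative QoS check for real-time voice traffic. Thresholds are assumed,
# commonly cited VoIP targets, not values from CA UC Monitor.
THRESHOLDS = {"latency_ms": 150.0, "jitter_ms": 30.0, "loss_pct": 1.0}

def qos_breaches(latency_ms: float, jitter_ms: float, loss_pct: float) -> list:
    """Return the names of metrics that exceed the assumed targets."""
    measured = {"latency_ms": latency_ms, "jitter_ms": jitter_ms, "loss_pct": loss_pct}
    return [name for name, limit in THRESHOLDS.items() if measured[name] > limit]

if __name__ == "__main__":
    # A hypothetical call sample: high one-way latency, acceptable jitter and loss.
    print(qos_breaches(latency_ms=180.0, jitter_ms=12.0, loss_pct=0.4))  # ['latency_ms']
```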
A recent poll on Business Service Reliability (BSR) found that IT needs to put an increased focus on managing and measuring the customer experience to improve business outcomes.
The poll—conducted by IDG Research Services on behalf of CA Technologies—sought to determine how organizations measure both BSR and the customer experience that IT provides. BSR is CA Technologies’ approach to helping IT transform by providing a clearly defined framework for managing and measuring customer interactions.
The majority of respondents (58 percent) are using a combination of surveys and other metrics (e.g., application downtime and call-center volume) to measure the customer experience. Just over one quarter (26 percent) reported that IT delivers an exceptional experience. Most respondents (61 percent) classified the customer experience as adequate. The remainder described it as inconsistent: the customer experience meets the expectations of the business some, but not all, of the time.
Some IT organizations think using only “best-of-breed” tools is the best approach for IT monitoring and management. But a recent IDC white paper, sponsored by CA Technologies, shows that this isn’t the case at all—and that such an approach can be ineffective, wasteful and slow. IT is already dealing with environments that are incredibly complex—so having a variety of point solutions to manage those environments only adds to the complexity.
The white paper, entitled “Unified Infrastructure Monitoring and Management Increases Availability, MTTR, and IT Staff Productivity,” (October 2013) reveals how unified monitoring and management solutions that are consistent across systems, networks and applications give IT staff a real advantage over fragmented point solutions.
Unified monitoring enables IT to view cross-domain information that is correlated automatically and presented on a single dashboard, giving staff a quick, visual grasp of the infrastructure and current issues. The alternative is manually correlating information presented in a variety of formats on individual dashboards, which takes significantly longer to identify issues and their impact across the infrastructure.
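To make the contrast concrete, here is a small sketch of the kind of cross-domain correlation a unified dashboard performs automatically: grouping events that hit the same host within a short time window. The event sources, field names and five-minute window are illustrative assumptions, not the product’s actual data model.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events from three monitoring domains on one web server.
events = [
    {"domain": "network", "host": "web01", "time": datetime(2013, 12, 2, 9, 1), "msg": "link saturation"},
    {"domain": "system", "host": "web01", "time": datetime(2013, 12, 2, 9, 3), "msg": "high CPU"},
    {"domain": "application", "host": "web01", "time": datetime(2013, 12, 2, 9, 4), "msg": "slow checkout"},
]

def correlate(events, window=timedelta(minutes=5)):
    """Group events that occur on the same host within the given time window."""
    groups = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        for (host, start), group in groups.items():
            if event["host"] == host and event["time"] - start <= window:
                group.append(event)
                break
        else:  # no open group matched; start a new one keyed by host and first timestamp
            groups[(event["host"], event["time"])].append(event)
    return groups

for (host, start), group in correlate(events).items():
    domains = sorted({e["domain"] for e in group})
    print(f"{host} @ {start:%H:%M}: {len(group)} related events across {domains}")
```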
By Jim Metzler
IT organizations need to continually show the value they provide to the company’s business unit managers. Business unit managers, however, typically see the value provided by IT as coming primarily from the applications they use to run their respective groups. Hence, effective application performance management is an important way for IT to add value. It is also a double-edged sword: do it well and you add business value; do it badly and your company runs a great risk of losing revenue.
Research that I recently conducted highlights two key facts about application performance management. The first is that, in the majority of cases where an application’s performance begins to degrade, the degradation is noticed first by the end user, not by the IT organization. The second is that the management task IT organizations are most interested in improving is rapidly identifying the cause of degraded application performance.
One of the primary obstacles preventing IT organizations from getting better at application performance management is that they have an ocean of management data to analyze. This data comes from a continually growing number of business transactions and from a complex, distributed IT infrastructure that is both physical and virtual, and that is increasingly provided by third-party cloud computing vendors.
© 2013 CA Technologies, Inc. All Rights Reserved