CX: The CFO’s New Best Friend

Although it’s starting to become a well-worn aphorism, “data is the new oil” resonates more than ever. Like oil, data is an abundant resource, but it doesn’t become useful until it is refined and turned into fuel.

Without proper refinement, big data may be worthless. The stock of big data unicorn Palantir, for example, sank on news that it had lost key client relationships due to a lack of perceived value. According to a recent article, the company collected abundant data from CPG companies but was unable to apply it to practical use cases.

Marketers are starting to turn away from using abundant, yet commoditized, third-party data sources in exchanges and move toward creating peer-to-peer data relationships and leveraging second-party data for targeting. This speaks to the refinement of targeting data: Better quality in the raw materials always yields more potent fuel for performance. Not all data is the same, and not every technology platform can spin data straw into gold.

Marketers have been using available data for addressable marketing for years, but now are starting to mine their own data and get value from the information they collect from registrations, mobile applications, media performance and site visitation. Data management platforms (DMPs) are helping them collect, refine, normalize and associate their disparate first-party data with actual people for targeting.

This is a beautiful thing. Technology is enabling marketers to mine their own data and own it. Yet many marketers are still just scratching the surface of what they can do, using data primarily for targeting addressable media.

Some, however, are starting to deliver customer experiences that go beyond targeting display advertising by using data to shape the way consumers interact with their brands beyond media.

The case for personalization – customer experience management, or CX – is palpable.

Read the source article at AdExchanger
Original Author: adexchanger

Making a Happy Marriage of WebSphere & TIBCO Infrastructures (Part 5 of 6)

Monitoring Message Flow: Middleware Transactions

(Fifth of a six-part blog series that describes how a team of IT pros and managers at one of the world’s largest global banks accommodated a bank acquisition and mastered a complex messaging environment.)

Continue reading

Simulation: The ‘Aha’ Moment for DevOps Adoption

Every IT transformation requires that people, process and technology align and work together to achieve success. As organizations begin to appreciate the benefits of a DevOps approach and look to change to a DevOps model of software development and release, there will be significant pressure on people, process and technology to support that change properly.

The technology and process changes required to enable DevOps to work efficiently shouldn’t be underestimated. Considerable time and investment are required to embrace the automation technologies that support a proper DevOps environment, and to develop the processes that make the best use of that technology.

But DevOps cannot work unless your people are ready to change. Development teams and IT operations—who traditionally have worked in silos—must work together, integrate seamlessly and understand not only their roles and responsibilities within the new environment but also the roles and responsibilities of those around them.

And, as with every change, the real key to success is your people.

Creating Successful Change

The benefits of taking a DevOps approach are numerous: By moving toward more frequent delivery with reduced elements of change, you lower risk and increase system stability. Although automation is costly to implement, it allows teams to work far more efficiently, increases reuse, reduces errors and brings development teams closer to the customer. New functionality released continually and rapidly can be validated in the market, feedback can be received, lessons can be learned and improvements implemented quickly.

Read the source article at devops.com
Original Author: Contributor

Leveraging the Power of DataPower Appliances in the Enterprise – Part 2

Monitoring DataPower appliances

(Click to see the full-size infographic: 3 ways to instrument DataPower appliances)

DataPower appliances have several management interfaces that provide detailed information about system health, operations and performance. These metrics can be monitored through several management protocols, including SNMP, WSDM (Web Services Distributed Management), logging and other XML-based APIs.
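As an illustration of the SNMP option, here is a minimal sketch that polls a single OID from an appliance using the open-source SNMP4J library. The hostname, community string and target OID are placeholder assumptions, not values from this article; the real system-health OIDs live in the DataPower MIB files.

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class DataPowerSnmpPoll {
        public static void main(String[] args) throws Exception {
            // Placeholder address and community string -- substitute your appliance's values.
            CommunityTarget target = new CommunityTarget();
            target.setCommunity(new OctetString("public"));
            target.setAddress(GenericAddress.parse("udp:datapower.example.com/161"));
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(2);
            target.setTimeout(1500);

            // sysUpTime from the standard MIB-II as a safe example; look up the
            // appliance-specific health OIDs in the DataPower MIB files.
            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.3.0")));

            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();
            ResponseEvent response = snmp.get(pdu, target);
            if (response.getResponse() != null) {
                System.out.println(response.getResponse().get(0));
            }
            snmp.close();
        }
    }

The same pattern extends to GETNEXT/GETBULK requests when you want to walk whole status tables rather than poll single values.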

Authorized management and monitoring tools can subscribe to information about the appliance using these protocols, allowing administrators to access and capture a very broad range of configuration and status data, including: Continue reading

Embracing a DevOps Culture, Part 2

IT Communication: The Glue That Holds Dev and Ops Together

To be successful, organizations need to be able to respond as quickly as possible during a major IT incident to limit the negative impact on the business. This is achieved when Dev and Ops work together to create a central communication and collaboration center to ensure proper communication with key stakeholders.

An organization’s ability to effectively mitigate the impact of an IT issue relies heavily on its ability to access and communicate critical information, and to ensure the right people can analyze it and initiate the appropriate actions to keep the business running smoothly. The communication hub equipped with an IT communication solution will facilitate:

  • Reaching the right on-call people across all the different teams: infrastructure, server, system administration, middleware, network, DBA, QA, support, service desk and the application developers.
  • Notifying people over multiple channels (voice, SMS, email, push notification app, paging, etc.) until they respond, because email alone won’t wake anyone up, or automatically escalating to the next resolver on the on-call list (a minimal sketch of this loop follows the list).
  • Providing the right information so the IT resolvers can start investigating the issue, identify the root cause and put a resolution plan together without wasting time.
  • Getting people to collaborate using the same telecommunication and collaboration tools, whatever their time zone and wherever they may be located.
  • Contacting a third-party vendor when the problem is not attributable to the company but is caused by an external piece of software.
  • Informing other departments if the business impact grows so big that it affects the company’s profitability or reputation, such as a cyberattack leading to a data breach; the CEO, legal and marketing may need to be informed to anticipate the consequences.
  • Informing end users or customers to limit the number of incoming calls to the help desk.
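To make the escalation bullet concrete, here is a minimal, vendor-neutral sketch of the notify-then-escalate loop. The Channel interface, its method name and the EscalationPolicy class are hypothetical stand-ins; a real IT communication product provides this logic (plus scheduling, audit trails and reporting) out of the box.

    import java.util.List;

    // Hypothetical sketch of a notify-then-escalate loop.
    interface Channel {
        // Sends the message over one medium (voice, SMS, email, push, paging)
        // and returns true if the responder acknowledges within the timeout.
        boolean notifyAndAwaitAck(String responder, String message, long timeoutMillis);
    }

    class EscalationPolicy {
        private final List<String> onCallList;  // ordered list of resolvers
        private final List<Channel> channels;   // voice, SMS, email, push, paging...

        EscalationPolicy(List<String> onCallList, List<Channel> channels) {
            this.onCallList = onCallList;
            this.channels = channels;
        }

        // Try every channel for each resolver; escalate to the next resolver
        // on the on-call list when nobody acknowledges.
        String engage(String message, long timeoutMillis) {
            for (String responder : onCallList) {
                for (Channel channel : channels) {
                    if (channel.notifyAndAwaitAck(responder, message, timeoutMillis)) {
                        return responder; // someone acknowledged -- stop escalating
                    }
                }
            }
            return null; // nobody responded; open a bridge, page the incident manager
        }
    }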

The benefits of quick collaboration, rather than mere handoffs between teams, hardly need to be demonstrated. Removing broken pathways, obsolete delivery methods and redundant platforms will significantly improve the organization’s ability to communicate during critical moments. Becoming more efficient and eliminating wasted time will have a huge impact on the mean time to know (MTTK) and, in turn, on the mean time to resolve (MTTR), minimizing the disastrous impacts on the business and letting the company’s execs sleep at night.

Read the source article at devops.com
Original Author: Contributor

5 Prerequisites for a Successful DevOps Initiative

There are certain tools and attitudes that are essential to any successful DevOps initiative.

The manner in which a DevOps transformation is performed will have huge implications for the level of agility the company is able to achieve. However, before that point is even reached, some vital foundations must be laid; it is from that foundation that any innovation occurs. With that in mind, each of these elements is equally important and should be regarded as such.

  • Consensus
  • Flexibility
  • Automation
  • Cooperation
  • Rethink Architecture

Read the source article at devops.com
Original Author: Michael Schmidt

How I Will Use Big Data in My Presidential Campaign

The growing importance of big data in presidential elections is no secret. It jumped into the spotlight in 2008, when then-Senator Barack Obama invested significant resources in big data analysis, which helped propel him to the White House.

At the same time, Nate Silver used big data to project Obama’s presidential victory with astounding accuracy, correctly predicting the outcomes in 49 of the 50 states in the 2008 presidential election.

Silver did even better in the 2012 United States presidential election, successfully predicting the winner in all 50 states and the District of Columbia. Given the growing importance of big data in the presidential election, I thought it only natural to consider throwing my hat into the ring and running for president.

Read the source article at Data Informed
Original Author: Bill Schmarzo

Making a Happy Marriage of WebSphere & TIBCO Infrastructures (Part 4 of 6)

TIBCO RV and EMS Metrics

TIBCO Metrics

(Fourth of a six-part blog series that describes how a team of IT pros and managers at one of the world’s largest global banks accommodated a bank acquisition and mastered a complex messaging environment.)

A major driver of MegaBank’s decision to standardize on Nastel AutoPilot was its ability to monitor all of its middleware systems. With AutoPilot’s TIBCO EMS plug-in, the IT team could easily monitor TIBCO EMS servers and components, including queues, topics, consumers and producers, from a single, consolidated vantage point.
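For readers who want to pull the same raw numbers themselves, TIBCO’s EMS admin API (the tibjmsadmin.jar shipped with EMS) exposes queue and topic statistics from Java. A minimal sketch, assuming a local EMS server and admin credentials (placeholders, not MegaBank’s actual setup):

    import com.tibco.tibjms.admin.QueueInfo;
    import com.tibco.tibjms.admin.TibjmsAdmin;

    public class EmsQueueDepths {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials -- substitute your EMS server's values.
            TibjmsAdmin admin = new TibjmsAdmin("tcp://localhost:7222", "admin", "password");
            try {
                // List every queue with its current pending-message backlog.
                for (QueueInfo queue : admin.getQueues()) {
                    System.out.printf("%s: %d pending messages%n",
                            queue.getName(), queue.getPendingMessageCount());
                }
            } finally {
                admin.close();
            }
        }
    }

A monitoring product effectively runs this kind of polling continuously, correlates the results across servers and raises alerts on thresholds, which is what makes the single consolidated vantage point possible.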

Continue reading

Scaling Collaboration in DevOps

Those familiar with DevOps generally agree that it is equally as much about culture as it is about technology. There are certainly tools and practices involved in the effective implementation of DevOps, but the foundation of DevOps success is how well teams and individuals collaborate across the enterprise to get things done more rapidly, efficiently and effectively.

Most DevOps platforms and tools are designed with scalability in mind. DevOps environments often run in the cloud and tend to be volatile. It’s important for the software that supports DevOps to be able to scale in real time to address spikes and lulls in demand. The same thing is true for the human element as well, but scaling collaboration is a whole different story.

Collaboration across the enterprise is critical for DevOps success. Great code and development work need to make it over the finish line to production to benefit customers. The challenge organizations face is how to do that seamlessly, with as much speed and automation as possible, without sacrificing quality or performance. How can businesses streamline code development and deployment while maintaining visibility, governance and compliance?

Read the source article at devops.com
Original Author: Tony Bradley

Custom App Analytics with Data Insight

A sample AutoPilot Insight Dashboard

As developers, we always want to know how people use and experience our apps. We want insight into things like the following (a rough sketch of such instrumentation appears after the list):

  • Which errors or crashes occur?
  • Which screens are visited the most, and how much time do visitors spend on each of them?
  • How many sessions do users have each day?
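As a rough illustration of the kind of instrumentation that produces those answers, here is a tiny, hand-rolled event tracker that counts screen views, records errors and measures session length. The class and method names are illustrative assumptions, not AutoPilot Insight’s actual API; in practice the events would be shipped to an analytics backend such as Insight rather than printed.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative in-app tracker: counts screen views and measures session length.
    public class AppTracker {
        private final Map<String, AtomicLong> screenViews = new ConcurrentHashMap<>();
        private Instant sessionStart;

        public void startSession() {
            sessionStart = Instant.now();
        }

        public void trackScreen(String screenName) {
            // Increment the view counter for this screen, creating it on first visit.
            screenViews.computeIfAbsent(screenName, k -> new AtomicLong()).incrementAndGet();
        }

        public void trackError(Throwable t) {
            // In practice: serialize the stack trace and send it to the analytics backend.
            System.err.println("error event: " + t);
        }

        public void endSession() {
            Duration length = Duration.between(sessionStart, Instant.now());
            System.out.println("session lasted " + length.getSeconds()
                    + "s, views=" + screenViews);
        }
    }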

Continue reading