5 Best Practices to Avoid Data Breaches in the Healthcare Industry

Data breaches are common and can occur at almost any type of organization or company, but they are particularly troublesome and widespread in the healthcare industry. Patients' sensitive medical records are constantly at risk, whether the organization is large or small, and a breach of any size affects the individuals whose data is exposed.

The U.S. Department of Health and Human Services maintains an online database of healthcare breaches affecting over 500 individuals, but many smaller breaches occur each year as well. According to Forbes, over 112 million records were compromised by data breaches in 2015 alone—and 90% of the top ten breaches were related to hacking or IT incidents.

The average cost of a breach continues to rise; in 2014, that average stood at $5.9 million. With cybercrime still on the rise, the healthcare industry must take steps to reduce the number and impact of data breaches, which lead to the compromise of sensitive data and to financial consequences. Healthcare organizations should follow cyber security best practices to minimize the risk of a breach. These steps include:

Educating Employees on Security Risks

Healthcare organizations may have stellar employees, but human error can always lead to security issues. Proper training on regulations, security protocols—and support for employees using mobile devices—can help reduce these errors and improve overall security. Employees should only have the data necessary to perform the functions of their job—the fewer places data is stored, the more secure it is.
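The least-privilege principle above can be sketched in code. This is an illustrative example only; the roles and record fields are hypothetical and do not represent a real HIPAA schema or any specific system.

```python
# Each role is granted only the fields it needs to do its job.
ROLE_PERMISSIONS = {
    "billing": {"patient_id", "insurance_id", "balance"},
    "nurse": {"patient_id", "allergies", "medications"},
    "admin": {"patient_id"},
}

def visible_fields(role, record):
    """Return only the fields of a patient record that the role may see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "patient_id": "P-1001",
    "insurance_id": "INS-77",
    "balance": 120.0,
    "allergies": ["penicillin"],
    "medications": ["atorvastatin"],
}

# Billing staff never see clinical fields such as allergies.
print(visible_fields("billing", record))
```

The fewer roles that can reach a field, the fewer places an attacker or an accidental disclosure can expose it.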

Choosing Vendors Carefully

Many healthcare organizations use offsite data storage systems run by third-party vendors who are responsible for the organization's records. Choosing partners who follow best practices is essential to keeping data safe. When an organization does not have direct control over its data, the security precautions must be just as strict as if the data were stored in-house.

Best Practices are the Best Defense

Unfortunately, it's not always possible to prevent a data breach. By following best practices, however, healthcare organizations can minimize the risk of a breach and be better equipped to handle one in the future. Prevention may require quite a bit of preparation, but it can save money in the long run and keep patients' sensitive data from falling into the wrong hands.

View Source


What Is The Right Way Of Using Big Data For Business Excellence?

Using Big Data for business success is undoubtedly a smart move, but only when it is implemented in an orderly way. Responsibility for the initiative starts with properly understanding and organizing the data before bringing in large volumes of it. Since most companies have a leadership and capabilities vacuum around Big Data and analytics, using it smartly and efficiently often seems out of reach. So how do you get the most out of Big Data? What skills and tools are needed? How should you protect and store it for better utilization?

Data Should be Smart Enough to Provide Needed Insights!

You have saved and organized the collected data to save time and effort later on. But are you sure the data is capable of giving you the insights and information you expect from it?

Apart from collecting data, you must thoroughly check whether you are accumulating the right data or just noise. Consumers' buying habits, feedback, opinions, changing trends, and desires are a few examples of worthwhile data. These will help you make better, actionable decisions.

In this fast, data-driven world, you must act quickly as well as accurately; otherwise, you will end up with junk and nothing more. You must therefore competently handle the velocity, volume, and variety of the data to make this approach pay off for your business. Make sure that you are collecting the required data and not just volume. This will save you from the challenges that arise when you begin sifting through it.
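Separating the required data from mere volume can be as simple as a filter at ingestion time. The sketch below is a minimal illustration; the field names and event types are hypothetical examples, not a prescribed schema.

```python
# Keep only well-formed records describing events we can act on.
REQUIRED_FIELDS = {"event", "customer_id"}
WORTHWHILE_EVENTS = {"purchase", "feedback", "return"}

def is_signal(record):
    """True if the record has the required fields and a worthwhile event type."""
    return REQUIRED_FIELDS <= record.keys() and record["event"] in WORTHWHILE_EVENTS

stream = [
    {"event": "purchase", "customer_id": 1, "item": "widget"},
    {"event": "page_ping"},                                   # noise: nothing actionable
    {"event": "feedback", "customer_id": 2, "text": "too slow"},
]

signal = [r for r in stream if is_signal(r)]
print(len(signal))  # 2 of the 3 records are worth keeping
```

Filtering early keeps the volume down and spares you the sifting later.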

View Source




Latency Numbers Everyone Should Know

A Checklist Providing Atomic Latency Numbers

Check out the graphic from DZone, “Latency Numbers Everyone Should Know”.  The graphic is a chart showing common operations, an explanation of each operation and how long these operations typically take on commodity hardware.

This information can be very helpful when doing performance tuning, as you will know how long your operations should take. You can also use this information to calculate potential throughput.
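The throughput calculation is straightforward: if each operation costs one latency unit, the inverse of that latency bounds the sequential rate. The figures below are the widely cited approximate latency numbers for commodity hardware, not measurements of any specific system.

```python
# Approximate, widely published latency numbers (nanoseconds).
LATENCY_NS = {
    "main_memory_read": 100,        # ~100 ns per reference
    "ssd_random_read": 150_000,     # ~150 microseconds
    "disk_seek": 10_000_000,        # ~10 ms
}

def max_ops_per_second(operation):
    """Upper bound on sequential ops/sec if each op costs one latency unit."""
    return 1e9 / LATENCY_NS[operation]

for op in LATENCY_NS:
    print(f"{op}: ~{max_ops_per_second(op):,.0f} ops/sec")
```

For example, at ~10 ms per seek, a single spinning disk tops out at roughly 100 random seeks per second, which is why random I/O dominates so many performance discussions.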

4 Key Benefits from Using Self-Service for IBM MQ – Part 2 of 3

Drivers for MQ Self-Service

In Part I, we discussed the extensive interest in MQ self-service. This interest is due to a number of factors, including the shrinking size of middleware staffs, growing workloads, and increasing application complexity.

As application complexity rises, the demand for MQ access grows accordingly. The number of application developers, IT support staff, and operations personnel needing access to MQ is increasing, and they all come to the middleware group for help.

A variety of use cases are common within most enterprises. Understanding the typical business requirements (reducing support costs) and stakeholder needs (increased visibility, message browsing, and the ability to take action) is essential to providing an effective self-service system.

Typical Requirements for MQ Self-Service  

  • Visibility Anywhere: View queue status, depth, and channel usage via the web
  • Testing: Examine queues, channels, queue managers, and subscriptions
  • Forensics: Browse and manipulate application messages
  • Action: Act on application-specific messages (move, copy, edit, route, replay, create)
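To make the "Visibility Anywhere" requirement concrete, here is a minimal model of a status view that flags queues approaching capacity so non-specialists can see trouble at a glance. The queue names and threshold are hypothetical; a real solution would obtain the current depth through MQ's administrative interfaces rather than from hard-coded data.

```python
from dataclasses import dataclass

@dataclass
class QueueStatus:
    name: str
    depth: int        # current number of messages on the queue
    max_depth: int    # configured maximum depth

    @property
    def utilization(self):
        return self.depth / self.max_depth

def queues_needing_attention(queues, threshold=0.8):
    """Return names of queues whose depth exceeds the given fraction of capacity."""
    return [q.name for q in queues if q.utilization >= threshold]

snapshot = [
    QueueStatus("ORDERS.IN", depth=4_500, max_depth=5_000),
    QueueStatus("PAYMENTS.OUT", depth=12, max_depth=5_000),
]
print(queues_needing_attention(snapshot))  # ['ORDERS.IN']
```

A filling queue often signals a stalled consumer, which is exactly the kind of condition a stakeholder should be able to spot without calling the middleware team.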

Crafting an Effective Self-Service Solution

How do you go about crafting an effective self-service solution for IBM MQ? Many organizations use IBM's MQ Explorer. After all, why not? It comes out of the box with the product, so it is certainly an option. It has all the capabilities you need to manage and view the MQ environment; however, it can be challenging to use for problem diagnosis, and it falls short of several of the objectives we identified.

MQ Explorer lacks:

  • Simplicity: You need to install an Eclipse client and set the appropriate security level to grant access. This can expose the full complexity of MQ, requiring tool users to have a solid understanding of MQ or they will be lost, and making it difficult for non-specialists to complete their tasks.
  • Scalability: Rolling out MQ Explorer to hundreds or thousands of users is challenging for most organizations, as it is a manual task.
  • Security & Audit: You end up giving people more capability than you want to give them. Users can potentially see and do more than what is needed. This can be dangerous.

The Better Approach to Self Service

First, start off with a self-service monitoring dashboard which provides stakeholders a business view of MQ:

  • Activity
  • Availability
  • Performance

Teams acquire an end-to-end view of application flows through all the moving parts that make up a workflow.

Next, provide users with real-time application visibility for instant awareness of performance problems, something standard web-enabled dashboards do not typically supply. Users gain the ability to see what is happening within MQ when they need it to understand their situation, and problem resolution time shrinks. When a problem occurs, instead of calling the middleware team to say something like "I think MQ is broken," the user can describe the issue they are experiencing and place it in a business context for rapid remediation.

Then, provide deep-dive visibility. Many users lack insight into how MQ impacts application performance and behavior. This approach to MQ self-service is empowering for users, as it enables them to better understand how the middleware behaves. Stakeholders get the opportunity to participate actively and to proactively diagnose situations where issues might occur, which in turn helps the team prevent issues from recurring. Once deep visibility is provided, stakeholder productivity improves.

Finally, we come to taking action. When talking about self-service, we are not merely considering how users view objects.  We are also covering how users take action to improve the situation.  Make it simple for users to understand the necessary procedures that are available to them. Help them choose the right action to perform, through effective communication in a format that is brief, easy to understand and one that enables a quick user response.

To learn more, stay tuned for Part 3 of this three-part series, "4 Key Benefits from Using Self-Service for IBM MQ," and learn how users can take action when provided with a graphical historical view of middleware performance. Find out which metrics matter most, how to interpret them, and when to act.

Bottlenecks & Latencies, How to Keep Your Threads Busy

Check out the infographic from DZone, "Bottlenecks & Latencies: How to Keep Your Threads Busy". The infographic provides a clear understanding of the difference between something you can performance-tune (a bottleneck) and something you have to live with (a latency). This very useful tool can help you stay focused on the large number of common bottlenecks that will need your time and skill to find and fix. This article is also available as part of a free eBook, The 2016 DZone Guide to Performance and Monitoring. And now the infographic…

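One common rule of thumb for keeping threads busy in the face of latency is to size a thread pool as cores × (1 + wait time / compute time): the more time each task spends blocked on I/O or remote calls, the more threads you need to saturate the CPUs. The numbers below are illustrative, not a tuning prescription.

```python
def pool_size(cores, wait_ms, compute_ms):
    """Estimate the thread count needed to keep all cores busy."""
    return round(cores * (1 + wait_ms / compute_ms))

# CPU-bound work: almost no waiting, so roughly one thread per core suffices.
print(pool_size(cores=8, wait_ms=0, compute_ms=10))   # 8

# I/O-bound work: 90 ms of waiting per 10 ms of compute needs 10x the threads.
print(pool_size(cores=8, wait_ms=90, compute_ms=10))  # 80
```

This is why distinguishing bottlenecks from latencies matters: you fix a bottleneck, but you provision around a latency.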

What does Pokémon GO have to do with Transaction Tracking?

Pokémon GO, the new game from Nintendo, is hot. It's on the news, and it's being used in political campaigns by both parties (Pikachu is on the presidential campaign trail, it seems).

This innovative game avoids tethering a player to a game console; instead, players get up, travel, and try to find Pokémon in the real world. Location-based augmented reality is the name being given to this new approach to gaming. It seems as if the developer of this game was responding to the age-old complaint that video games are unhealthy because you just sit in one place and stare at a screen.

DevOps: Making Value Flow

As we have all realized by now, DevOps is not a goal. It is merely a means (better yet, a mindset) to achieve high-performance teams and organizations. DevOps enables cross-functional (cross-silo) collaboration in and between your teams to support a continuously improving digital value chain. In fact, I would rather speak of Value Flow than DevOps. In the end, though, it doesn't really matter what you call it, as long as you achieve your goal. So if high-performance teams are what we want to achieve, which path leads us there?

I would say the path to high performance has two axes: value and flow. If both value and flow are executed and adopted effectively, you will achieve high performance. If there is no flow in the team but value is high, the organization will lack innovative power and feel bureaucratic. A team delivering low value but high flow focuses on the wrong work, leading to burnout. A team with neither flow nor value is bound to become extinct.

View Source

Improve New Product Development with Predictive Analytics

The power of predictive analytics is multiplied when an organization takes an end-to-end process view of new product development (NPD). Idea generation and business case decision making are important. But an end-to-end view of performance in a business-process context provides additional opportunities to apply predictive analytics to improve performance in other areas, such as product development, testing, and launch.

The new product development process.


Analytical methods apply to each of these steps. For example, in the area of product creation, it’s possible to improve performance by classifying key attributes of past success – such as early supplier involvement, broad cross-functional collaboration, use of key metrics to move from one gate to the next, etc. – and then model the relationship between those attributes and the commercial success of the offerings.
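The classify-then-model idea above can be sketched very simply: tabulate attributes of past launches and relate each attribute to the share of commercial successes among launches that had it. The data below is fabricated for illustration; a real analysis would fit a proper statistical model over real launch history.

```python
past_launches = [
    # (early_supplier_involvement, cross_functional_collaboration, succeeded)
    (True,  True,  True),
    (True,  False, True),
    (False, True,  False),
    (False, False, False),
    (True,  True,  True),
]

def success_rate(launches, attribute_index):
    """Share of successful launches among those that had the given attribute."""
    with_attr = [l for l in launches if l[attribute_index]]
    return sum(1 for l in with_attr if l[2]) / len(with_attr)

print(success_rate(past_launches, 0))  # 1.0: supplier involvement tracks success here
```

An attribute whose presence sharply raises the success rate is a candidate predictor worth carrying into later stages such as testing and launch planning.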


View Source

5 big changes IT leaders need to know in the DevOps age

Whether you are a vigilant practitioner or a curious IT leader looking for a new approach to management, here are five things you should know about the changing role of IT leadership in the age of DevOps.

1. Outcome and results focused: End user and business

In the past, IT’s focus was on the technology, naturally. That has always been very important. But what’s changing are the vital relationships between technology and the end user and between technology and business success.

The big difference from, say, 30 years ago is that IT no longer exists simply to support basic business processes, such as back-office accounting and payroll. The greater mission now is a focus on outcomes and results for both of the key stakeholders named above: the organization and its customers (or end users). Technology becomes the enabler and the avenue through which the business delivers outcomes and results to end users.

Both practicing and mastering this capability starts with partnering with your stakeholders and focusing on the end users. As an IT leader, you need to understand why you are delivering the technology solution, while building solid relationships and celebrating successes.

2. Visibility and collaboration

Visibility and collaboration refer to a team's awareness of what is going on and its ability to work together and include end users in the process. These are by no means new concepts, but they are key elements that IT leaders must emphasize more than ever across the business. As agile, and more recently DevOps, has evolved in the IT space over the last 15 years, collaboration has likewise evolved across multiple stakeholders, whose roles must be continually integrated with the business and understood by all contributors.

The focus here is metrics and data-driven decision-making, two features of today’s IT leadership. What kind of metrics? For example, prioritized backlog depth, velocity, stories delivered versus committed, and time to market. These metrics help leaders develop a baseline to understand what happened and why, then retrospectively define how the team can improve together for the next iteration.
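The delivery metrics named above are easy to compute once sprint data is captured. The sprint figures below are hypothetical; the formulas (average points delivered per sprint, and delivered versus committed) are the standard ones.

```python
sprints = [
    {"committed": 30, "delivered": 24},
    {"committed": 28, "delivered": 28},
    {"committed": 32, "delivered": 26},
]

def velocity(sprints):
    """Average story points delivered per sprint."""
    return sum(s["delivered"] for s in sprints) / len(sprints)

def commitment_ratio(sprints):
    """Delivered vs. committed, a signal for retrospective discussion."""
    return sum(s["delivered"] for s in sprints) / sum(s["committed"] for s in sprints)

print(velocity(sprints))                     # 26.0
print(round(commitment_ratio(sprints), 2))   # 0.87
```

A team consistently delivering below its commitment has a baseline to discuss in retrospective: is the estimation off, or is unplanned work crowding out the backlog?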

The challenge comes when the adoption of these practices is not widespread or not implemented according to a defined standard. That makes it difficult to have consistent communications and expectations across organizations. A lack of visibility and collaboration inhibits a business’s ability to reduce time to market.

View Source