Log4j gets added to the code “wall of shame.”

Nastel® Technologies

It seems that every few weeks, we are alerted to a new significant security issue in one of the plethora of widely used code components. Each time, the same pundits raise the same range of concerns about open-source code.


  • “It’s because people are not paid to develop this code; if only they were paid fairly, then the code would be more rigorously tested.”
  • “It’s because companies don’t put enough effort into testing their solutions before going into production.”
  • “It’s because legacy features are maintained for backward compatibility, and this leads to increased risks.”


The list of “usual suspects” is long, and I could add at least 20 more “reasons” to it without thinking too hard.


I’m not sure that open-source code is any riskier than proprietary code.


There, I said it. When code is used by millions of developers in all kinds of scenarios, you get a form of evolutionary testing that is hard to replicate any other way.


When a company builds code from bare metal, it has a level of control. Still, the cost and time involved are far greater, and testing across different use cases will always be limited to the company’s own expected uses.


On the other hand, when teams of students and amateurs mix with experienced professionals working in their spare time to create something, the outcome can be quite impressive (though not always), and the flame wars and bug reports from hundreds, thousands, or millions of users expose many subtle issues that can then be worked around or fixed in updates.


Without the concept of open source, the rate and pace of development would look very different. It’s incredible to see the brainpower of a significant proportion of the human race being used to benefit all (capitalism and socialism end up being the same thing, it seems).


There are some impressively rigorous standards continuously being developed and applied to test code and certify it for different levels of availability and security.


Now, companies are also starting to consider performance much earlier in their architecture and development cycles. Along with security and availability testing, performance benchmarking is a powerful way of exposing cost risk to your business.


Recently we published a performance benchmarking report that compares different integration infrastructure (i2) messaging middleware solutions under varying workloads in different environments. This adds data to an area of testing that can help architects and developers choose the most appropriate solution for their specific needs. You can find this paper here.

Nastel Technologies, a global leader in integration infrastructure (i2) and transaction management for mission-critical applications, helps companies achieve flawless delivery of digital services.


Nastel delivers Integration Infrastructure Management (i2M), Monitoring, Tracking, and Analytics to detect anomalies, accelerate decisions, and enable customers to constantly innovate, answering business-centric questions and providing actionable guidance for decision-makers.
The Nastel Platform delivers:

  • Integration Infrastructure Management (i2M)
  • Predictive and Proactive anomaly detection that virtually eliminates war room scenarios and improves root cause analysis
  • Self-service for DevOps and CI/CD teams to achieve their speed-to-market goals
  • Advanced reporting and alerting for business, IT, compliance, and security purposes
  • Decision Support (DSS) for business and IT
  • Visualization of end-to-end user experiences through the entire application stack
  • Innovative Machine Learning AI to compare real-time to the historical record and discover and remediate events before they are critical
  • Large scale, high-performance complex event processing that delivers tracing, tracking, and stitching of all forms of machine data
  • And much more

