Big Data Analytics Could Reduce Power Grid Outages
The power grid is one of those things most of us take for granted, but it is vulnerable to outages due to aging infrastructure, the variability of distributed renewable generation, and attacks. The annual cost of short power interruptions (five minutes or less) in the U.S. is $60 billion; in Canada, momentary outages (one minute or less) cost $8 billion annually, while sustained outages cost $4 billion.
To help avoid such outages, the National Energy Technology Laboratory (NETL) of the Department of Energy (DOE) announced the award of nearly $7 million to explore the use of big data, artificial intelligence, and machine learning technology and tools to derive more value from the vast amounts of sensor data already being gathered and used to monitor the health of the grid and support system operations. A Texas A&M University team led by Dr. Mladen Kezunovic, director of the Texas A&M Engineering Experiment Station’s Smart Grid Center, received a $1 million NETL grant to use Big Data Analytics (BDA) to automate monitoring of synchrophasor recordings.
The DOE projects are expected to inform and shape the future development and application of faster grid analytics and modeling, better grid asset management and sub-second automatic control actions that will help system operators avoid grid outages, improve operations and reduce costs.
Kezunovic, Regents Professor and the Eugene E. Webb Professor in the Department of Electrical and Computer Engineering, will lead the project “Big Data Synchrophasor Monitoring and Analytics for Resiliency Tracking (BDSMART).”
The project will use BDA to automate the monitoring of synchrophasor recordings, improving the assessment of events that may affect power system resilience. The proposed BDA will automatically extract knowledge for event analysis, classification and prediction, used at different stages of grid resilience assessment: operations, operations planning and planning.
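To make the idea of automated event screening over synchrophasor recordings concrete, here is a minimal sketch in Python. The thresholds, labels, and function names are illustrative assumptions, not details of the BDSMART project; real phasor measurement unit (PMU) analytics would operate on far richer data at much higher rates.

```python
# Minimal sketch of automated synchrophasor (PMU) event screening,
# assuming a stream of (timestamp, frequency_hz) samples.
# Thresholds and labels are illustrative only.

NOMINAL_HZ = 60.0  # nominal North American grid frequency

def classify_sample(freq_hz, dev_minor=0.05, dev_major=0.5):
    """Label one PMU frequency reading by its deviation from nominal."""
    dev = abs(freq_hz - NOMINAL_HZ)
    if dev >= dev_major:
        return "major_event"   # e.g., large generation loss
    if dev >= dev_minor:
        return "minor_event"   # small disturbance worth logging
    return "normal"

def screen_recording(samples):
    """Return (timestamp, label) for every non-normal sample."""
    return [(t, classify_sample(f)) for t, f in samples
            if classify_sample(f) != "normal"]

events = screen_recording([(0.0, 60.01), (0.1, 59.94), (0.2, 59.4)])
```

In practice, the classification step would be learned from historical event data rather than fixed thresholds, which is where the big data analytics come in.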
This article originally appeared on today.tamu.edu, where the full article is available.
Nastel Technologies uses machine learning to detect anomalies, behavior and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel's AutoPilot® for Analytics fuses:
- Advanced predictive anomaly detection, Bayesian Classification and other machine learning algorithms
- Raw information handling and analytics speed
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards
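As a rough illustration of the Bayesian classification mentioned in the list above, here is a toy naive Bayes classifier over categorical features. This is a generic textbook sketch, not Nastel's AutoPilot API; all names and data are hypothetical.

```python
# Toy naive Bayes classifier for labeling observations (e.g., "anomaly"
# vs. "normal") from categorical features. Illustrative only.
from collections import Counter, defaultdict

def train(rows):
    """rows: list of (features_tuple, label). Returns count-based model."""
    priors = Counter()
    likelihood = defaultdict(Counter)  # (feature_index, label) -> value counts
    for features, label in rows:
        priors[label] += 1
        for i, v in enumerate(features):
            likelihood[(i, label)][v] += 1
    return priors, likelihood

def predict(model, features):
    """Pick the label with the highest naive-Bayes score."""
    priors, likelihood = model
    total = sum(priors.values())
    best, best_p = None, -1.0
    for label, count in priors.items():
        p = count / total
        for i, v in enumerate(features):
            c = likelihood[(i, label)]
            # add-one (Laplace) smoothing avoids zero probabilities
            p *= (c[v] + 1) / (count + len(c) + 1)
        if p > best_p:
            best, best_p = label, p
    return best
```

A classifier like this learns from labeled history, which is the same basic pattern, at vastly larger scale, behind ML-driven anomaly detection.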
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering Middleware Management, Monitoring, Tracking, and Analytics to detect anomalies, accelerate decisions, enable continuous innovation, answer business-centric questions, and provide actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics