Building management systems (BMS) have become integral to modern data center operations, yet many operators fail to unlock their full potential. While these systems are primarily viewed as tools for reliability and uptime – admittedly the cornerstones of any data center – focusing too narrowly on metrics like failure rate means overlooking a BMS’s capabilities for operational optimization.
Mishandling or underutilizing a BMS in this way results in missed opportunities to reduce energy costs, extend equipment lifespan, and, ultimately, enhance overall effectiveness.
There are several common blind spots in how facilities deploy and manage a BMS, regardless of a data center’s size, tier, or function, or the type of BMS in use.
Typically, these blind spots fall into two categories: validity and reliability. By addressing these issues, operators can realize significant improvements in reliability and efficiency, unlocking the transformative potential of their BMS instead of leaving money on the table.
Validity: Trusting the data you see
Data centers are built on a foundation of accurate information. Yet, operators often take the data a BMS presents at face value. This is a risky assumption. Without correct data, even the most advanced systems can make poor decisions that jeopardize operations, and that risk only grows as data centers integrate AI systems that work alongside their BMS.
Calibrating and cross-checking data points mitigates this risk across a range of areas, several of which are described below. For instance, inaccurate readings from electrical power monitoring system (EPMS) points can mislead AI algorithms into unnecessary load-switching decisions, potentially overloading equipment or causing downtime.
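As a minimal sketch of what such a cross-check might look like, the Python snippet below compares a feeder’s reported load against the sum of its branch-circuit readings before the value is trusted for automated switching. The values and the five percent tolerance are illustrative assumptions, not a reference implementation.

```python
# Hypothetical cross-check: does a feeder's reported load roughly match the
# sum of its branch readings? If not, hold off on automated load switching.
# Values and the 5 percent tolerance are examples only.

def feeder_reading_is_plausible(feeder_kw, branch_kw, tolerance=0.05):
    """Return True when the feeder reading agrees with its branch circuits."""
    expected = sum(branch_kw)
    if expected == 0:
        return feeder_kw == 0
    return abs(feeder_kw - expected) / expected <= tolerance

# A reversed CT or a stale point fails this check and blocks AI-driven
# switching until a person validates the data.
if not feeder_reading_is_plausible(412.0, [120.5, 118.2, 121.9]):
    print("EPMS reading failed cross-check; hold automated load switching.")
```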
Actuator discrepancies
Disagreements between the position or status a BMS reports and the actual state of an actuator or setpoint can lead to excessive energy consumption. A common example involves air handlers. If the BMS incorrectly shows an air handler as 90 percent open in economizer mode when it is only at 10 percent (due to a reversed analog output signal), this can result in unnecessary chilled water usage to maintain desired temperatures.
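One way this kind of discrepancy can be caught automatically, sketched below, is to compare the commanded position with the feedback the actuator reports and flag any disagreement beyond a deadband. The ten percent deadband and the point values are assumptions for illustration.

```python
# Illustrative command-vs-feedback check for an air handler damper.
# The 10 percent deadband is an assumed tolerance, not a standard.

def feedback_mismatch(commanded_pct, feedback_pct, deadband_pct=10.0):
    """Flag actuators whose reported position disagrees with the command."""
    return abs(commanded_pct - feedback_pct) > deadband_pct

# A reversed analog output often shows up as a near mirror image:
# commanded 90 percent open, feedback reporting roughly 10 percent.
if feedback_mismatch(commanded_pct=90.0, feedback_pct=10.0):
    print("Damper feedback disagrees with command; check AO wiring and scaling.")
```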
Static pressure sensor calibration
Static pressure sensors play a crucial role in maintaining proper airflow and temperature control. However, if these sensors are out of calibration, it can lead to increased energy consumption and challenges in meeting setpoint temperatures.
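As a simple illustration, a maintenance round can compare the BMS reading against a calibrated reference gauge and flag any sensor outside tolerance. The threshold below is a placeholder; real tolerances come from the site’s calibration program.

```python
# Sketch of a calibration spot-check: compare the BMS static pressure point
# to a calibrated reference gauge. The 12 Pa tolerance is a placeholder.

def needs_recalibration(bms_reading_pa, reference_pa, max_error_pa=12.0):
    """Return True when the BMS point disagrees with the reference gauge."""
    return abs(bms_reading_pa - reference_pa) > max_error_pa

if needs_recalibration(bms_reading_pa=240.0, reference_pa=255.0):
    print("Static pressure sensor out of tolerance; schedule recalibration.")
```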
Chilled water system PID loop tuning
Poorly tuned proportional-integral-derivative (PID) loops in chilled water systems can cause oscillation in pumps, leading to premature motor failures and excessive wear on equipment. Furthermore, these oscillations can affect chillers and downstream systems, compromising reliability.
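To make the tuning discussion concrete, here is a minimal discrete PID sketch for a chilled water pump speed loop showing where the gains enter the calculation. The gains, units, and limits are placeholders rather than recommended values.

```python
# Minimal discrete PID sketch for a chilled water pump speed loop.
# Gains, units, and output limits are placeholders, not tuning advice.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral = max(-50.0, min(50.0, self.integral + error * self.dt))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(100.0, output))  # pump speed command, percent

# Overly aggressive kp/ki values make this output hunt around the setpoint,
# cycling the pump; conservative gains plus trend review avoid that wear.
loop = PID(kp=2.0, ki=0.1, kd=0.0, dt=1.0)
speed_cmd = loop.update(setpoint=15.0, measurement=13.8)  # differential pressure, psi
```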
Reliability: Trusting your system will always work
Data centers demand unwavering reliability. Yet, many operators do not adequately assess the reliability of their BMS itself. While the above discussion of validity focused on accurate data and proper operation, reliability – ensuring that the BMS is stable and resilient – is also vital. Here are some key considerations for enhancing BMS reliability:
Fail-safe logic
Components under control should be reviewed and tested to ensure that if they fail, the system as a whole won’t become dangerous or cause further damage. This concept, often called ‘Fail Operationally Safe,’ should be applied to contacts, dampers, valves, setpoints, and more.
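A minimal sketch of what fail-safe defaults might look like in practice, assuming a simple mapping from each controlled point to a predefined safe state; the device names and positions are illustrative, not a site standard.

```python
# Sketch of fail-safe defaults: if a command is lost or invalid, fall back
# to a predefined safe state. Names and positions are illustrative only.

from typing import Optional

FAIL_SAFE_POSITIONS = {
    "chw_valve": 100.0,          # fail open so cooling continues
    "outside_air_damper": 0.0,   # fail closed to protect the data hall
    "crah_fan_enable": 1.0,      # fail running
}

def command_with_failsafe(point: str, live_value: Optional[float]) -> float:
    """Use the live command when available; otherwise apply the safe default."""
    if live_value is None:       # lost comms, stale data, or a bad sensor
        return FAIL_SAFE_POSITIONS[point]
    return live_value

print(command_with_failsafe("chw_valve", None))  # -> 100.0, valve fails open
```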
Sensor dependability
Sensors are the lifeblood of a BMS, providing the data necessary to monitor conditions and adjust systems. However, sensors that fail or drift out of calibration can result in inaccurate readings and improper responses.
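One way to catch failing sensors early, sketched below, is to screen recent readings for physically implausible values or flatlined data. The thresholds and the supply air example are assumptions for demonstration.

```python
# Illustrative sensor health screen: flag readings that are out of a plausible
# range or stuck over a window. Thresholds are assumptions for demonstration.

def sensor_looks_unhealthy(readings_c, low_c=5.0, high_c=45.0, min_span_c=0.05):
    """Return True for implausible or flatlined temperature readings."""
    if any(r < low_c or r > high_c for r in readings_c):
        return True                                  # physically implausible
    if max(readings_c) - min(readings_c) < min_span_c:
        return True                                  # stuck / flatlined
    return False

# Example: a supply air temperature reporting exactly 22.0C for a full hour.
if sensor_looks_unhealthy([22.0] * 60):
    print("Sensor appears stuck; fall back to a redundant point and service it.")
```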
Network infrastructure quality
The network infrastructure supporting a BMS is as critical as the sensors and controllers it connects. Poorly maintained or outdated network components can introduce latency, packet loss, or outright failures.
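As a rough sketch, controller response times can be tracked so that slow or dropped polls surface before they become failures. The poll_controller() function below is a stand-in for whatever protocol read the site actually uses (BACnet, SNMP, and so on), and the addresses and threshold are invented.

```python
# Illustrative latency watch for BMS field controllers. poll_controller() is
# a placeholder for a real protocol read; addresses and limits are examples.

import time

def poll_controller(address):
    """Stand-in for a real read; the sleep simulates network round-trip time."""
    time.sleep(0.05)

def measure_poll_latency(address, warn_after_s=0.5):
    start = time.monotonic()
    poll_controller(address)
    elapsed = time.monotonic() - start
    if elapsed > warn_after_s:
        print(f"{address}: slow response ({elapsed:.2f}s); check switch and cabling")
    return elapsed

for controller in ("ahu-01.bms.local", "crah-07.bms.local"):
    measure_poll_latency(controller)
```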
Bridging the gaps
As we’ve seen, to maximize the benefits of a BMS, data center operators must shift their focus beyond simply avoiding downtime. What are some actionable steps toward addressing these blind spots and optimizing operations?
First and foremost, regular audits should be conducted, involving periodic validation of data points, preventative maintenance, recalibration of sensors, and testing of failover systems to ensure the system operates as intended.
Operators should also leverage analytics using historical data and advanced techniques to identify inefficiencies and refine system performance.
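As one small example of what that analysis might look like, the sketch below counts how often a loop’s process variable crosses its setpoint within a trend window; frequent crossings suggest a hunting loop worth retuning. The sample trend and threshold are invented for illustration.

```python
# Hedged example of mining trend history: count setpoint crossings in a
# window. Frequent crossings suggest a hunting, poorly tuned loop.
# The sample trend and the threshold of 4 are invented for illustration.

def setpoint_crossings(trend, setpoint):
    """Count how many times consecutive samples straddle the setpoint."""
    above = [value > setpoint for value in trend]
    return sum(1 for a, b in zip(above, above[1:]) if a != b)

hourly_trend = [6.8, 7.4, 6.5, 7.6, 6.4, 7.5, 6.6]  # e.g. CHW supply temp, C
if setpoint_crossings(hourly_trend, setpoint=7.0) > 4:
    print("Loop is hunting around its setpoint; flag for retuning.")
```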
Investing in training is also crucial, as equipping operations teams with the knowledge to interpret BMS data accurately allows them to make informed adjustments. Finally, enhancing communication between IT, facilities, and operations teams fosters collaboration, aligns goals, and ensures the BMS is utilized to its full potential.
With this approach, a BMS becomes more than just a safeguard against outages – it becomes a powerful tool for driving operational efficiency and sustainability. By adopting best practices that address validity and reliability blind spots, data center operators can unlock the full potential of their BMS, conserving power, cutting expenses, and increasing the durability of vital infrastructure.
Learn more about Ascent here.