Roentgen Works - White Paper: Next Generation Technology
BRIT Systems White Paper
Next Generation Tools for Building Radiology IT Systems
By Robbie Barton, Chief Software Architect, BRIT Systems.
When BRIT Systems was formed in 1992 as a teleradiology, PACS and RIS provider, companies such as IBM, EMC and Oracle drove information technology innovation. Today, however, the Internet revolution has changed the pace and focus of innovation. Google, Amazon and Facebook handle large workloads and volume (millions of users) not with conventional, mainframe computing, but rather with commodity hardware and open standards – a different approach that utilizes virtualization, dynamic scaling and cloud computing.
In many ways, radiology image management is no different. Image file sizes have grown exponentially with the emergence of high-field MR, multi-detector CT and digital mammography, and visible-light modalities – endoscopy and digital pathology – are on the horizon. Radiologists are no longer centralized in a reading room down the hall from the exam rooms; rather, remote reading and decentralized “departments” are growing in number and require the ability to quickly view and deliver images electronically and to communicate with referring clinicians.
Recently, companies such as Google, Facebook and Amazon have published their SOAP and AJAX APIs and made their services available to other companies. BRIT realized early on that these APIs along with an Internet approach to computing opens up new possibilities for radiology image management. Today, we are implementing browser-based technologies in our next generation Roentgen Works family of PACS, RIS and Teleradiology solutions.
A new approach to information technology
High availability of data is critical to organizations that do commerce on the Internet, such as Amazon, handle high user volumes, such as Facebook, or run a dominant search engine, such as Google. Many existing PACS use clustering to ensure high availability of data. Clustering refers to two (or more) equivalent systems that communicate to share the workload: typically one is active and the other on standby, and if the active system fails, the workload shifts over to the second system. Unfortunately, this approach requires healthcare facilities to purchase twice the hardware and maintain the software on two different systems.
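The active/standby arrangement above can be sketched in a few lines. This is a minimal illustration only; the node names and request strings are invented for the example.

```python
# Minimal sketch of active/standby clustering: two equivalent systems,
# one serving all requests while the other waits, with the standby
# assuming the full workload when the active node fails.
class FailoverCluster:
    def __init__(self):
        self.active = "node-1"    # hypothetical node names
        self.standby = "node-2"

    def handle(self, request):
        # All work goes to the active node while it is healthy.
        return f"{self.active} handled {request}"

    def failover(self):
        # On failure, the standby takes over as the active node.
        self.active, self.standby = self.standby, self.active
```

Note that both nodes must be purchased and maintained even though only one does work at a time, which is exactly the cost drawback the paragraph describes.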
A newer approach to clustering is the use of virtualization, dynamic scaling and cloud computing. Virtualization refers to separating the operating system from the underlying hardware. Each computing function does not require dedicated hardware; rather, by running multiple copies of the OS as virtual machines, a workload can move to a resource that has more power, capacity, etc. With dynamic scaling, all computing nodes are active, and additional nodes can be added to the cluster for more capacity. Cloud computing is a “pay-per-use” concept that uses virtualization to tap into powerful servers, often located within highly secure data centers, for advanced computations or other complex processing.
Likewise, storage technologies have evolved. Many healthcare facilities discover they reach their storage capacity faster than planned, resulting in the need to purchase/install additional hardware. This approach, which typically involves the use of storage area networks (SAN) and RAID storage technologies, requires a large upfront, capital expense. For disaster recovery, mirroring has been a standard implementation within healthcare, resulting in twice the hardware expense.
Grid replication offers a more cost-effective option. Google, for example, uses commodity hardware and grid replication to keep multiple copies of data (e.g., search results). If one copy fails, another takes over. And, as with computing virtualization, organizations like Amazon provide pay-per-use storage that is geographically dispersed.
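The core idea of grid replication can be sketched as follows. The node names, replication factor and naive placement rule are assumptions for illustration; production grids hash keys to spread replicas across the cluster.

```python
# Illustrative sketch of grid replication on commodity nodes: each
# object is written to several nodes, so losing any single node
# loses no data.
REPLICATION_FACTOR = 2  # assumed number of copies per object

class StorageGrid:
    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}

    def put(self, key, value):
        # Naive placement: write to the first REPLICATION_FACTOR nodes
        # in sorted order. Real grids distribute replicas by hashing.
        for name in sorted(self.nodes)[:REPLICATION_FACTOR]:
            self.nodes[name][key] = value

    def get(self, key):
        # Any surviving replica can satisfy the read.
        for store in self.nodes.values():
            if key in store:
                return store[key]
        raise KeyError(key)

    def fail_node(self, name):
        # Simulate a commodity node dropping out of the grid.
        del self.nodes[name]
```

Because every object lives on more than one inexpensive node, the grid tolerates a node failure without the mirrored, double-hardware expense described above.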
Finally, database technologies have also recently changed. A large, central database becomes slower as more data is added, and it is expensive to expand. Replicating the data can be costly, and if any data becomes corrupt, the corruption is replicated as well. Partitioning at the database level has also been used; however, its disadvantage is that all partitions must be available to access the data.
The new approach is toward the use of embedded databases that run on every node. Take for example a cluster of three systems; each node has its own database, so access is not dependent upon a centralized database. By using cluster replication, every node in the cluster can provide definitive answers or information to the client, with the same results from each node.
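The three-node example above can be sketched directly. This is a simplified illustration, with Python dictionaries standing in for each node's embedded database.

```python
# Sketch of cluster replication over per-node embedded databases:
# every write is applied to every node's local store, so any node
# can answer a read with the same definitive result.
class ReplicatedCluster:
    def __init__(self, size=3):
        self.nodes = [{} for _ in range(size)]  # one embedded DB per node

    def write(self, key, value):
        # Cluster replication: apply the write to every node.
        for db in self.nodes:
            db[key] = value

    def read(self, node_index, key):
        # Reads never depend on a centralized database; any node works.
        return self.nodes[node_index][key]
```

Since each node holds a full replica, a client can query whichever node is closest or least loaded and still get an identical answer.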
A modification of this approach is sharding, wherein not all nodes hold all the data. Unlike partitioning, sharding splits the data at a logical point through pre-defined algorithms; for example, the shards could be based on user ID or type of study. The bottom line is that each computer manages a subset of the database; spreading the data out in this manner leads to better system performance.
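A pre-defined sharding algorithm keyed on user ID, as described above, can be as simple as a stable hash. The shard count and key choice here are assumptions for illustration.

```python
# Hedged sketch of sharding by user ID: a stable hash maps each ID to
# one shard, so every node independently agrees on which subset of
# the data each shard owns.
import hashlib

SHARD_COUNT = 3  # assumed cluster size, matching the example above

def shard_for(user_id: str) -> int:
    """Deterministically map a user ID to one of SHARD_COUNT shards."""
    digest = hashlib.sha1(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % SHARD_COUNT
```

A cryptographic hash is used here only for its even distribution; any stable hash function would serve, as long as every node computes shard assignments identically.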
The way we transmit and share data is also changing. Healthcare uses HL7 and DICOM to transmit and share data and images. While these standards have become ubiquitous to radiology, new APIs may fare better as communication tools.
In 2002, Google published its SOAP Search API. What is important about the SOAP Search API is that it enables computer applications to query search engines and extract data, just as people do with Google. For example, a clinical application can send a SOAP request to the server to gather patient data, such as date of birth, exam data, etc., and then populate that information onto the order form, report, etc.
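A SOAP request of the kind described is just a structured XML envelope. The sketch below builds one for a hypothetical patient-lookup service; the operation name and fields are invented for illustration, since a real service defines these in its WSDL.

```python
# Illustrative only: constructing a SOAP request envelope for a
# hypothetical patient-demographics lookup. Operation and field
# names are invented; they are not a real service's API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_patient_query(patient_id):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, "GetPatientDemographics")  # hypothetical operation
    ET.SubElement(op, "PatientID").text = patient_id
    return ET.tostring(envelope, encoding="utf-8")
```

The resulting XML would be POSTed over HTTP to the service endpoint, and the application would parse the response to populate order forms or reports.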
Historically, clinicians used dedicated workstations, referred to as thick clients, with application software loaded onto the hard drive. This approach presented several issues for efficient radiology reading: the workstation had to be powerful enough to handle large studies, and advanced 3D post-processing or intelligent programs (such as CAD) were not part of the standard radiology reading workstation. Rather, these software applications were stored on dedicated workstations, so a radiologist who wanted to conduct image analysis using CAD or 3D would have to “move” the data (and him/herself) to that workstation. And, for the IT department, maintaining and updating the software of each thick client across a healthcare facility was a time-consuming and costly process.
In the early 2000s, thin clients with server-based processing were the next technology wave, replacing dedicated workstations. By loading the 3D software, for example, on a high powered server, users could access the application as needed, from any connected workstation. However, the thin client had to run the same specific release of the OS as the server.
Today, web clients are changing the way clinicians work and the way technology impacts their workflow. With these workstations, all that is needed to view data and access applications is a Web browser. Processing is done at the server, so there is no loading of software or an OS onto the Web client; therefore, it has a lower maintenance cost for IT – they simply upgrade the server.
Much has changed from the wired world that spawned the emergence of data sharing. Historically, the low bandwidth of network cables prevented the “anytime, anywhere” dissemination of data. Local area networks (LANs), routers and gateways were required to send and retrieve information, and geography as well as processing power limited what, when and where data could be shared.
The emergence of high-speed wide area networks (WANs) opened new possibilities for sharing data. Healthcare facilities installed 10, 100 and 1000 Mbps Ethernet networks while millions of people relied on broadband – DSL or cable – to become interconnected from home. While this approach is still widely used, wireless technologies are gaining greater acceptance and use for data transmission.
Today, mobile devices such as laptops, smart phones and iPads allow users to view images, data and waveforms with download speeds that seem lightning fast. Consider that today, many new cell phones can access data from a server faster than a five-year-old wired computer. The emergence of WiFi and, perhaps more importantly, 3G and 4G wireless networks shows great potential for increasing bandwidth and speed. While the use of wireless networks remains limited in healthcare, continued advancements in this technology will likely lead to an increase in use.
Yet, advancements in networking aren’t just about wired or wireless. Load balancing is a technique in which some processing resources reside in a data center reached via the Internet, while others are located on the LAN or WAN. With this approach, the amount of data that travels across the Internet from a LAN to the data center can be much smaller, reducing the need for high bandwidth between the site and the Internet.
The emergence of virtual private networks (VPNs) further enables fast, secure and reliable access to data from remote locations. A VPN uses virtual connections that are routed through the Internet to a data center or other end user. Privacy of the data is assured through security procedures and “tunneling protocols” that encrypt data and prevent unauthorized users or data from entering that “tunnel.”
The impact on radiology
So what does this all mean to users of RIS, PACS and Teleradiology? First, deployment strategies should see a dramatic decrease in the time and cost required to “go live,” due to the use of commodity hardware and open standards. With Web clients, the time and cost to install software drops dramatically, as one server can feed the applications and data to many clients.
The use of cloud computing and dynamic scaling can enable a facility to buy storage/archive devices on an “as-needed” basis, rather than trying to predict those needs in five years (which results in buying additional hardware that is not needed at the time of implementation and is obsolete by the time it is needed). In particular, cloud computing enables the “pay-as-you-go” approach, which provides a facility with the added benefits of lower upfront costs, no long term commitment and the ability to expense the cost of storage rather than amortizing an asset over its useful life cycle. Each of these computing advancements also translates to lower administrative and management costs for IT departments.
Plus, the simplification of hardware and software can impact the RFP process. The entire RFP/bidding process can take as long as 12 months; unfortunately, with the speed and breadth of new IT developments, items in the RFP can become obsolete – or yesterday’s technology – before implementation.
Virtualization and Web clients are driving remote reading capabilities and anytime, anywhere access to data and images. The future possibilities of emerging wireless technologies will only further propel the dissemination of patient information throughout the enterprise and beyond.
The use of next-generation, browser-based technologies and a new approach to computing is not just a vision for radiology. It is reality that BRIT Systems has brought to fruition with the Roentgen Works family of radiology IT products.
Copyright 2010 BRIT Systems