High Performance Computing

Nvidia CUDA: Kudos or Complaints? (2008-11-20)

In a <a href="http://highpercom.blogspot.com/2008/10/hpc-in-box.html">previous article</a> we began speculating about the impact the new <a href="http://www.nvidia.com/object/tesla_computing_solutions.html">Nvidia Tesla</a> boards might have on the proliferation of HPC in the manufacturing space, and that sparked discussion about the difficulties of <a href="http://www.gpgpu.org/developer/">programming GPUs</a>. This week I had the chance to "sit down" (we were actually standing and walking in the <a href="http://www.nvidia.com/page/home.html">Nvidia</a> booth) with Sumit Gupta of Nvidia to talk a bit about <a href="http://www.nvidia.com/object/cuda_get.html">CUDA programming</a>. CUDA extends C with a small set of keywords for expressing parallelism, making it a potentially more intuitive model for parallel programming. "It will not make parallel programming easy," claimed Gupta, "but it does make it easier." Like many other vendors selling parallel programming tools and languages, Nvidia has stories about high school kids successfully using them as proof that they are easy. I am not buying that as a valid measure. I know high school students who can do things even I can't with a computer. I say give the tools to an old school C programmer on the plant floor. If he can use them, THEN they are easy ;-)<br /><br />It is important to note up front that the CUDA/Tesla pairing is best suited to problems that are <a href="http://en.wikipedia.org/wiki/Embarrassingly_parallel">embarrassingly parallel</a> in nature and can easily be spawned across 30,000 or more threads.
The speed gain from this pairing comes largely from hiding memory latency; if you launch too few threads, the latency stays exposed and the speedup never materializes. It also means that if your threads contend heavily for shared data, synchronization and memory traffic will eat into the gains. <br /><br />If your problem, like Helen of Troy, can launch a thousand threads, then the advantage of Tesla is that it was built to handle the thread management for you. As Gupta said, "launch as many threads as you can and let the hardware handle it." There is a very active user forum at the <a href="http://www.nvidia.com/object/cuda_get.html">CudaZone</a> where people share code, examples and advice. The forums are populated not only by users, but also by Nvidia employees who help users solve problems and learn the programming environment. <br /><br />Although some of you are ubergeeks who enjoy hacking code and will probably show up on the Nvidia forums, for most plant floor folks this new technology is not really useful until it shows up in COTS software.
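To make Gupta's "launch as many threads as you can" advice concrete, here is a minimal CUDA sketch. The kernel, array size and scale factor are illustrative choices of mine, not from Nvidia's materials, and error handling is omitted: each array element gets its own thread, and oversubscribing the hardware is exactly what lets the scheduler hide memory latency.

```cuda
#include <cstdio>
#include <cstdlib>

// Scale one array element per thread: an embarrassingly parallel task
// with no data shared between threads, so there is nothing to contend on.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)               // the last block may be only partially full
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;   // ~1M elements: far more threads than GPU cores
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element and let the hardware schedule them;
    // the oversubscription is what hides the memory latency.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```

Note how little of this is thread management: no thread pools, no work queues. You state the decomposition (one thread per element) and the hardware does the rest, which is the point Gupta was making.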
At the show this week, <a href="http://ansys.com/">Ansys</a> was demonstrating a mechanical library that had been ported to CUDA, and <a href="http://www.pgroup.com/">The Portland Group</a> announced that the latest version of their PGI tools would include <a href="http://www.pgroup.com/support/new_rel.htm">"Provisional support for x64+GPU on 64-bit Linux for CUDA-enabled NVIDIA GPUs using the new high-level PGI Accelerator Compilers programming model"</a>.<br /><br />Like all other IT solutions, this is not a silver bullet, and the ecosystem of software support is still young, but it is an area to watch closely. It may even have an application in your plant today.

Good News for Business in New Mexico (2008-11-19)

In contemplating a move to using HPC to speed up processes in their plants, many businesses are staring at an "expert knowledge gap" that can make them feel trapped at the bottom of a canyon. <a href="http://www.newmexico.gov/">New Mexico, a state famous for its canyons</a>, has recently launched a collaborative center that can give businesses a hand out of the gap and into profitability. The <a href="http://newmexicosupercomputer.com/">New Mexico Computing Applications Center (NMCAC)</a> was launched on September 19, 2008, and I had a chance this afternoon to sit down with Dr. Thomas Bowles, Chairman of the center's Board of Directors and Science Advisor to Governor Bill Richardson of New Mexico, and Dr. Lorie Liebrock, the center's education director, to learn more about the services they are offering to manufacturers and other businesses.
<br /><br />The NMCAC is a nonprofit organization that owns its own hardware: Encanto, the 7th fastest computer in the world on the June 2008 <a href="http://www.top500.org/">Top 500 list</a>. They provide access to the experts at the five founding organizations: <a href="http://www.unm.edu/">University of New Mexico</a>, <a href="http://www.nmsu.edu/">New Mexico State University</a>, <a href="http://www.nmt.edu/">New Mexico Institute of Mining & Technology</a>, <a href="http://www.lanl.gov/">Los Alamos National Laboratory</a> and <a href="http://www.sandia.gov/">Sandia National Laboratories</a>, as well as to their recently announced partner, <a href="http://darkstrand.com/">Darkstrand</a>. With this combination of resources, the NMCAC sees itself as an HPC integrator. Although Dr. Bowles credits the <a href="http://highpercom.blogspot.com/2007/11/blue-collar-computing-overcoming-high.html">Blue Collar Computing initiative</a> at <a href="http://www.osc.edu/">OSC</a> as a model for the center at its conception two years ago, the services are slightly different: NMCAC gives primary focus to its integrator role rather than to providing HPC SaaS. Just as you might hire an integrator to assist with the evaluation, planning and implementation of a new <a href="http://www.mesa.org/index.php?page=mesa-model">MOM system on the plant floor</a>, you now have access to an HPC integrator who can assist with up-front analysis, planning and implementation of your HPC projects. <br /><br />Since the NMCAC is partially funded by the State of New Mexico, they give priority to businesses that can show a direct or indirect positive impact on economic development in the state. If you are a business or a startup located in New Mexico, they are also working with the local Economic Development Boards to assist with funding, and your first consult may come with no out-of-pocket expense. <br /><br />What advantage does an HPC integrator bring to a business?
If you are contemplating moving legacy systems to an HPC platform, they can assist with the up-front consult to evaluate whether the move has ROI. If you are developing a new algorithm, or investigating recently developed algorithms for new processes, you can have access to some of the best computing brains in New Mexico. If you have a one-time need for modeling or simulation to take a product from late-stage R&D to production, they may already have the software and trained staff needed to complete it, saving your company that infrastructure and personnel investment. <br /><br />What remains unknown at this early stage is how affordable the pricing will be for out-of-state businesses, and what the turnaround time for problem solving will be. We are certainly excited about the possibilities this new model presents and hope to see other "HPC integrators" popping up in the ecosystem soon. We will be following their growth and experiences, but if you have a chance to work with the NMCAC, be sure to leave a comment below and share your experiences with us.

Shape the Future of Software Services (2008-11-19)

We have previously touched on the idea of <a href="http://highpercom.blogspot.com/2007/11/blue-collar-computing-overcoming-high.html">purchasing cycles of compute time</a> from a service provider as a way for small or mid-size businesses to do <a href="http://highpercom.blogspot.com/2008/11/hpc-brokerage-services-one-cloud-that.html">simulations or data analysis without investing in infrastructure</a>. This is also a useful model for large companies that need a way to manage peak load, or even as a "try before you buy" model to prove out ROI.
There is a review article in the works comparing and contrasting the different service providers to help you make better decisions. Almost all of these vendors provide raw compute cycles; you have to provide the software and the expert domain knowledge. Yesterday here at SC08 I had the chance to meet and talk with a company that is taking a slightly different approach. <a href="http://www.cyclecomputing.com/">Cycle Computing</a> is not just selling raw compute cycles; they are selling HPC software runs as a service. They work with ISVs, purchase appropriate software licenses and make sure the software runs on the hardware. All you have to do is supply data through the secured pipe and receive results. Application interfaces can currently be <a href="http://en.wikipedia.org/wiki/Secure_Shell">SSH</a>, a <a href="http://en.wikipedia.org/wiki/Representational_State_Transfer">RESTful web interface</a>, or even a <a href="http://en.wikipedia.org/wiki/Virtual_private_network">VPN</a> into your business network. <br /><br />They have been successful with this model in the financial and pharmaceutical verticals and are looking to expand into other areas of manufacturing. What are the implications for your manufacturing business? HPC results without the HPC headaches. No hardware to purchase, install, configure and administer. No software license purchases, management or renewals. No long-term contracts for software services you only use twice a year; pay only for the compute cycles you actually use. What will it take to make this a winner? If the services, speed and price are right, this could be a huge win for many businesses. And at this point in time, you have the ability to help shape the service into something that works for you. Cycle Computing is looking for feedback on which software packages you would make the best use of. I say this is your time to give them feedback on all points of the service. What is your price point requirement?
(I do not think that free is an option at this point....) What are the technical requirements and security considerations that would make or break this as you try to sell it to your management? Comment below and let them know what your dream HPC software service would look like.

Hidden Gems: Bread Crumbs to a Vision? (2008-11-18)

Every day, in almost every plant in the US, Dell provides great value. Low cost, high efficiency Dell computers are helping run plant floor systems all over the world. I was therefore excited to hear that Michael Dell, CEO of Dell, was giving the opening keynote at SC08 here in Austin. Perhaps that is also the reason for the depth of my disappointment in the talk (you can see my stream-of-consciousness thoughts on the keynote as it happened via this <a href="http://search.twitter.com/search.atom?q=sc08+MD+roguepuppet">Twitter RSS feed</a>).<br />Don't get me wrong, it was entertaining. The art department at Dell creates gorgeous slides. More importantly, buried amongst the 40-minute Dell commercial were some hidden gems for the folks in manufacturing to contemplate. While I do not think these qualify as a true vision, they are perhaps breadcrumbs on the path to having a vision for HPC. Consider the following points:<br /><br /><li>Dell is predicting that by 2010 processors will contain 80+ cores. If you think the software pricing model for your plant floor software and back end databases is a budget-busting nightmare with dual or quad core processors now, imagine what will happen in a few years when the smallest processors have 20 or so cores and high end processors have 80.
Now is the time for manufacturers as a group to start working with vendors to get the software pricing problem fixed. <br /><br /><li>Michael Dell admitted that the "core war" started recently and escalated fast, but he was very firm that it is not going to end soon. Software is the big gap in all of this. If heavily multi-core machines are going to be available on the plant floor soon, which vendors are positioning themselves to take advantage of that power? Even though tasks like scheduling could be written to exploit parallel computing power, I do not know of any out-of-the-box programs that do. I certainly cannot know everything, so please comment if you know of some. Yes, there are companies out there (FedEx comes to mind first) writing custom algorithms and software, but that requires a huge investment in time, talent and money. I will point out to the plant floor software vendors amongst us Michael Dell's thoughts: "..<span style="font-style:italic;">there is a need for petascale software to take advantage of all of this computing hardware .....if you can be the first to figure out a way to use all this hardware power, there is a lot of financial advantage to be made"</span>. <br /><br /><li>It was pointed out how steeply prices for compute power have dropped in the last five years. In 2003, 2 teraflops of compute power cost roughly a million dollars. That put 2-teraflop questions firmly out of reach for most manufacturers; the benefit gained was not worth adding a million dollars to production costs. Today, a million dollars buys about 25 teraflops. In the current economy you are even less likely to want to add a million dollars to production costs, but at that scale, roughly 80 thousand dollars buys you 2 teraflops of compute power. <br />What 2-teraflop questions or problems are you not tackling because you are still thinking in terms of 2003 pricing?
What simulations or real-time data analysis could be giving you a competitive edge that you have not even considered for fear of sticker shock?<br /><br />I am off to try to set up some one-on-one time with Dell's manufacturing outreach folks. If you have specific questions for Dell related to manufacturing, comment below so I can get responses for you. Perhaps they will be able to give me more interesting insights into the vision at Dell, since Mike missed the mark.

Michael Dell: SC08 Keynote (2008-11-18)

Although I will post a summary, notes and thoughts on this morning's <a href="http://www.dell.com/sc08">SC08 keynote, given by Michael Dell (CEO of Dell)</a>, here later this afternoon, I thought some of you might want to watch along. The advantage of a high tech conference with well-resourced speakers is that they can set up streaming video of their talks.<br />If you are interested, you can watch the keynote at http://www.dell.com/sc08. Part of his talk will be a review of HPC technology and developments over the last 40 years, but he will also be talking about the future vision. Dell's current big push is simplifying HPC, and they have active outreach for the manufacturing sector (more on that after my interviews later this week).<br />I am off to coffee up and prepare for the keynote. Enjoy!

HPC Brokerage Services: One Cloud That Won't Rain on Your Parade (2008-11-16)

Bob Graybill has started Nimbus, a company designed as a brokerage house for HPC services and cycles.
He discusses the <a href="http://www.hpcwire.com/industry/manufacturing/Bob_Graybill_Starts_National_Clearinghouse_Firm_for_HPC_Services_31132789.html">company's HPC service offerings and the rationale</a> for the new venture with HPCwire:<br /><span style="font-style:italic;"><br />To be a business-to-business brokerage or clearinghouse. The idea is to provide pre-negotiated access to cycles, software and expertise on an on-demand, pay-as-you-go basis. We won't own any equipment or do consulting ourselves. We're simply a clearinghouse that builds a menu of quality services and then brings the buyers and the sellers of those services together. Our targets are periodic and experimental users, initially in the manufacturing sector. These are people who don't want to jump over huge hurdles to get the benefits of modeling and simulation using HPC. We're an aggregator of services. We also help our partners, our service providers, by reaching out to a brand new community on their behalf.</span>

Your SC08 Proxy (2008-10-16)

Since corporate travel is becoming very tight in the current economic climate, this InTech blogger will be attending the <a href="http://sc08.supercomputing.org/index.php">SC08 supercomputing conference</a> in Austin as your proxy from Nov 17-21. Check out the <a href="http://sc08.supercomputing.org/?pg=techprogram.html">program</a> and <a href="http://sc08.supercomputing.org/?pg=exhibits.html">exhibit</a> listings and comment here if you have specific requests for more information or reports from the field, or are just generally curious. Watch here during the conference for daily blog updates with news, announcements and general debunking.
If you are also going to be attending the conference, be sure to comment here or email me directly so that we can cross paths in person.

HPC in a Box (2008-10-10)

Has your business been avoiding HPC because you lack the expert knowledge to install and maintain clusters? Have you been itching to run simulations and gain a competitive edge, but found the world of job schedulers, MPI and other "standard" HPC methods a new world of pain and confusion? A new solution may be on your horizon. This week <a href="http://www.jrti.com/products/velocitymicro_index.html">Velocity Micro announced a new line of Visual Supercomputer workstations</a>. With the power of a small supercomputer in a single box, preloaded with NVIDIA hardware and software, this is a turnkey solution without clustering, grids or other complex HPC elements. With pricing that looks to range from $1,500 to $6,000 and compute power that ranges above 3 teraflops, these little boxes will suit many small to mid-size business needs without breaking the bank. I am thinking of putting one on my Christmas list, so if you get one and have a review or comments, please speak up here.

Real-Life Simulation Pitfalls in the Data Center (2008-10-10)

One of the current trends in simulation software is using computational fluid dynamics (CFD) models to model heat flow in computer data centers. In an era of pushing for cost savings in every corner, lowering the cooling bill for your computer room is an easy sell to management.
But be careful that the simulations you use are based on actual measurements taken by someone crawling around the data center, and not just on averages built into the program, or you will likely not be able to deliver the savings you promised management. Kenneth Brill at Forbes.com reminds us that simulations must be grounded in real-life measurements <a href="http://www.forbes.com/technology/2008/10/07/cio-cooling-software-tech-cio-cx_kb_1008software.html">in this good article</a>.

Why Do You Use Parallel Programming, or Not? (2008-09-03)

There are lots of reasons people choose parallel programming, and not all of them are wise. <a href="http://www.hpcwire.com/features/Compilers_and_More_Parallel_Programming_Made_Easy.html">This article by Michael Wolfe</a> gives excellent insight into the current state of parallel algorithms and issues a call for better education of programmers in the field. What criteria do you use when deciding whether a problem will benefit from parallel programming?
How do you pick the staff who will solve the problems?

Simulation Life Cycle Management (2008-09-03)

At the heart and soul of <a href="http://www.industryweek.com/ReadArticle.aspx?ArticleID=17159">this excellent article</a> is the following quote:<br /><br />"For simulation to be truly effective as an integral part of the product development cycle, the processes, authoring tools, data, and resulting intellectual property associated with simulation must be shared, managed and secured as strategic business assets."<br /><br />As simulation practices grow, businesses need to put in place methods and tools that allow them to validate and re-create simulation results. How does your business manage simulation methods and data? What gaps are you struggling to fill? Which hurdles are you finding the hardest to overcome?

Digital versus Physical Modeling (2008-08-25)

<a href="http://www.pddnet.com/scripts/ShowPR.asp?RID=23114&CommonCount=0">This article in PD&D</a> gives an honest assessment of the use of software for digital modeling: its advantages and its limitations. What it misses is that you CAN have systems that allow full VR immersion and the ability to handle the parts, without too much crazy investment. Truth is, nothing is ever going to replace that final physical prototype build, but modeling can eliminate many iterations of physical builds.
If you had one of those installed, would you really stop building parts to see interactions, or would it save you one more round of prototype building?

Microsoft HPC Manufacturing Vertical (2008-08-22)

Microsoft, which has an HPC platform offering, has a <a href="http://www.microsoft.com/hpc/manufacturing.mspx">vertical focused on manufacturing</a>, with special emphasis on automotive, oil & gas and aerospace. What I can't tell is whether they are focusing on those areas because that is where their initial contacts already were, or because they really believe those are the best targets. Since HPC is strong in many other areas of manufacturing (Procter & Gamble immediately springs to mind), I hope they are not missing the boat. Let's hope they just need to grow in this area and are open to supporting lots of new areas of manufacturing. If you are a manufacturer in an area outside their current focus, contact them and let us know how they respond.

Sun Focuses Some Attention on the Manufacturing Vertical (2008-08-22)

Sun, which is pushing the OpenSolaris distribution for use in high performance computing applications, has announced a <a href="http://opensolaris.org/os/community/hpcdev/manufacturing/">manufacturing vertical focus</a>, especially for folks involved in EDA or MCAE.
If you are an EDA or MCAE person, let us know what you think of their approach.

Too Simple to Simulate? (2008-01-24)

One of the biggest current uses for <a href="http://en.wikipedia.org/wiki/High_performance_computing">high performance computing (HPC)</a> in manufacturing is <a href="http://en.wikipedia.org/wiki/Simulation">simulation</a>.<br />Simulation allows you to experiment with potential process or design factors without the expense of prototyping. The problem is that even on a fairly powerful single-CPU server, simulations can take days or longer to run. With the move to parallel computers, simulations can be run and ready for analysis in hours rather than days.<br /><a href="http://www.er.doe.gov/about/Office_of_the_Director-Bio.htm">Dr. Raymond Orbach, Under Secretary for Science at the Department of Energy,</a> calls simulation the third leg of innovation. During an invited talk at SC07 in November, he cited Boeing, Pratt & Whitney and Procter & Gamble as manufacturers who have very successfully incorporated simulation into their manufacturing processes to gain a competitive edge. By simulating their manufacturing processes, these companies have reduced time to market and lowered the cost of first prototype.<br />You are thinking to yourself, "That is all well and good if you are manufacturing jumbo jets, race cars or semiconductors, but all I make are little plastic widgets. My products are way too simple or inconsequential to require a simulation."
Instead, ponder the expenses invested (and potentially lost) in improvements made without simulating first.<br />Trial-and-error changes mean production and time costs for trials that fail; cardboard cutout or prototype changes add the cost of building and testing the physical prototype, as well as shop time and expense. In many cases, a simulation can be run on an x86 Linux cluster for very little cost beyond the investment in support and knowledge of the simulation process. These are up-front investment costs amortized over many simulation runs, versus the sunk costs that are lost when doing trial-and-error or prototyped improvements. This low cost clustered hardware solution has been used and documented since the late 1990s and has become mainstream, with even the Kuwait Oil Company recently moving to a Sun/Linux clustered solution for simulations.<br />Beyond that, <a href="http://www.sc.doe.gov/ascr/INCITE/index.html">the INCITE program</a> has been offering government facilities to manufacturers wanting to try out simulation on HPC infrastructure, or looking to test viability before investing in their own. The Council on Competitiveness sees simulation as a manufacturer's single biggest competitive edge, labeling it an "innovation accelerator."<br /><a href="http://www.compete.org/about-us/hpc/contact/">Suzy Tichenor</a> of the <a href="http://www.compete.org/">Council on Competitiveness</a> calls modeling and simulation "the key to building an innovation culture". Being able to simulate changes before implementing them makes it faster and cheaper to make the right decisions on process or product improvements, providing true innovations on the plant floor. As plants push for extreme lean JIT manufacturing, every small performance tweak along the manufacturing line can have a big impact on the bottom line.<br />Are you using simulation at your manufacturing facility?
If not, what are your limiting factors or barriers? If so, how are you using it today and where do you see it growing?

Blue Collar Computing: Overcoming High Performance Computing Barriers (2007-11-20)

The two major barriers for companies wanting to use <a href="http://en.wikipedia.org/wiki/High_performance_computing">High Performance Computing (HPC)</a> to solve complex problems are the cost of implementation and the difficulty of installing, maintaining, programming and using HPC systems.<br />While there is no question that multiprocessor parallel programming is very difficult (and getting more difficult with each leap forward made by hardware), it no longer needs to remain a barrier for companies that want to make use of these technologies.<br />No, we are not publishing "The Idiot's Guide to HPC", nor have I learned of any top secret government programs that successfully implement knowledge transfer. There are, however, resources available for manufacturers with challenging questions or simulations that could improve their processes. We will look at one approach in this article, with others to follow in future blog entries.<br /><a href="http://www.osc.edu/">The Ohio Supercomputer Center</a> (OSC) is located in <a href="http://maps.google.com/maps?f=q&hl=en&geocode=&q=columbus,+ohio&ie=UTF8&z=11&iwloc=addr">Columbus, OH</a> and provides the networking backbone for all Ohio public schools as well as supercomputing facilities for Ohio higher education. It is a state funded organization that also employs researchers and programmers. So what does the Ohio public school computer infrastructure have to do with your manufacturing plant?
The OSC has also built, and is actively growing, a program called <a href="http://bluecollarcomputing.org/index.shtml">"Blue Collar Computing."</a> This program allows industry to work with OSC resources and use their hardware and software for a fee. Since OSC is a state funded organization, businesses within Ohio get a discounted price, but the service is open and available to all. <br />There are a variety of services available that can help give a business a leg up and over the barriers of cost and difficulty. Since OSC provides services on a computing-as-a-utility basis, they are especially well suited to companies with one-time or infrequent problems to solve. A company can either bring in code it has purchased or written itself and buy CPU cycles on the OSC hardware, or experts at the OSC can work with the business on a project basis to develop new software that will then run on OSC hardware. This solves some of the learning-curve problems, but even COTS applications can be confusing and overwhelming to new users. Understanding this, the HPC community at large is moving toward building web portals and easy-to-use desktop clients designed for a much more user friendly experience. OSC is no exception: it is working with local industry consortia to find the general-case problems that can be distilled into easy-to-use web based applications, with the power of supercomputing on the back end to speed up processing. One example is a weld-simulation tool currently in production; they are also developing a material mix calculation tool for the polymer industry and a plant floor optimization simulation tool.<br />Not sure if parallel computing, clusters or HPC even makes sense for your business?
The experts at OSC can work with you to analyze your problems and your code, and even do test development and performance analysis, before you commit to an investment. They have actively worked with industry in this way, bringing their expert knowledge to the IT staff of businesses and leapfrogging the decision-making process.<br />OSC is not the only state-owned supercomputing center to work with industry; centers in other states are similarly engaged. We will highlight the successes of these other centers in future blog entries, along with the recent partnership of national supercomputing centers with industry, for example through the <a href="http://www.sc.doe.gov/ascr/INCITE/index.html">DOE's INCITE program</a>.<br />Have a hard problem that you think would make a great application? Have concerns or excitement about government and industry partnerships? Questions about HPC? Comment and let us know what you think.<br /><br /><strong>Nancy Glenn</strong> is a manufacturing solution design analyst and a contributor to <em>InTech</em> magazine.<br /><br /><strong>Life in HPC’s Fast Lane</strong> (November 19, 2007)<p><strong>By Nancy Glenn</strong></p><p>How do you end up in Reno, Nev., when a chunk of the rest of the Manufacturing IT world is heading for <a href="http://maps.google.com/maps?f=q&hl=en&geocode=&q=chicago,+Ill&ie=UTF8&z=10&iwloc=addr">Chicago?</a> </p><p>I am sitting here in the lobby of <a href="http://sc07.supercomp.org/">the SC07 conference</a>. That is SC, as in <a href="http://en.wikipedia.org/wiki/Super_computing">Super Computer</a>, not <a href="http://maps.google.com/maps?f=q&hl=en&geocode=&q=south+carolina&ie=UTF8&z=8&iwloc=addr">South Carolina</a>. Not the normal place to find a plant floor IT gal. 
Surrounded by computers with hundreds of CPUs, capable of many <a href="http://en.wikipedia.org/wiki/Teraflop">Teraflops</a> of calculations, and IT folks who debate parallel processing algorithms between bites of conference pastry, you might wonder if I got lost on the way to the <a href="http://www.ab.com/automationfair/summary.html">Rockwell Automation Fair</a>. But the truth is, the use of <a href="http://en.wikipedia.org/wiki/High_performance_computing">High Performance Computing (HPC)</a> has been growing in industry over the last few years. Not only is it used to speed up highly complex simulations, or to make Shrek render faster; some businesses also use it for supply chain calculations. With HPC pressing in from both the engineering and the business sides, it was inevitable that it would reach the plant floor eventually. It has become so pervasive in industry that this year the SC committee added an entire track of talks and case-study presentations on HPC in industry. From Boeing to Procter & Gamble, folks in a range of industries are here talking about what does and doesn’t work. </p><p>So what is HPC, and how might it apply to your company? The answer is not as easy or clear as we might like. We will begin with some definitions and background, followed by a series of posts for those interested in following HPC technology and learning more about what other people are doing. We will try to look critically at what technology is available, what works and what doesn’t, what is still in development and what is production ready, the barriers to HPC and models for overcoming them, and of course, how to meet ROI. Like anything else, this is a tool good for some problems, but not for others. Have a burning question or a topic you want to see tackled? Comment below, and we will either answer there (if it is brief) or roll it into a future blog post. We will also try to point out along the way where HPC just does not make sense. 
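One quick way to get a feel for where HPC does not make sense is Amdahl's law, a classic rule of thumb (my illustration here, not something presented at the conference): if only a fraction p of a job can run in parallel, the speedup from n processors is bounded by 1/((1 - p) + p/n), so the serial part caps the payoff no matter how much hardware you buy. A tiny sketch, with an illustrative function name of my own choosing:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Upper bound on speedup when only part of a job parallelizes.

    A back-of-envelope check: the serial portion of the work never
    speeds up, so it dominates as the processor count grows.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

if __name__ == "__main__":
    # A job that is 90% parallel never exceeds 10x speedup,
    # even on an arbitrarily large machine.
    for n in (2, 8, 64, 1024):
        print(n, "processors ->", round(amdahl_speedup(0.90, n), 2), "x")
```

Running a few numbers like this before buying hardware is a cheap sanity check: if your code is only half parallelizable, no cluster on earth will make it more than twice as fast.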
This is not about what is cool, but about becoming leaner, faster, and more competitive. As a matter of fact, <a href="http://www.compete.org/">The U.S. Council on Competitiveness</a> has identified HPC as a critical factor in improving the flexibility and competitiveness of businesses.<br /><br />HPC is defined as any compute process that uses multiple processors in parallel. This could be a single multi-core machine, a cluster of single-core machines, or a huge multi-CPU computer (potentially with multi-core processors) that can compute hundreds of Teraflops. Problems that make sense to tackle this way are ones that can be broken into small pieces which are not interdependent and can be run in parallel, with the results collected and compiled in a final step. This covers many things, from large data set sorting to digital image rendering. Many simulations are a good target for this sort of speed gain, as are tasks as simple as histogramming very large, complex data sets.<br /><br />Typically, we think of HPC as requiring supercomputers: huge, massive computers at the top of the class in size and speed. Computers like that are out of the price range of 99.999% of us, and silly even to consider. However, with recent technology advances, multi-processor computers can be purchased for under $2,000. This puts them in almost anyone’s price range; now the biggest challenge is learning to program in ways that take advantage of the extra compute power. With so much compute power at your fingertips, it becomes tempting to try to solve every problem by just throwing more hardware at it, but even at faster speeds, this consumes cycles of time, a precious commodity on the plant floor. We will try to look at how you ask the really good, useful questions, and how you weed out the ones that will just waste time.<br /><br />What’s hype and what’s not? Do the vendors really supply what they promise? These are all questions we will tackle as this blog moves forward. 
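As a concrete sketch of that break-apart-and-recombine pattern, here is a minimal Python example of histogramming a large data set in parallel. All the names here (`histogram_chunk`, `parallel_histogram`) are mine, purely illustrative, not from any vendor toolkit: split the data into independent chunks, count each chunk in a worker process, then collect and compile the partial counts in a final step.

```python
import random
from multiprocessing import Pool

NUM_BINS = 10

def histogram_chunk(chunk):
    """Count values in [0, 1) into NUM_BINS equal-width bins."""
    counts = [0] * NUM_BINS
    for value in chunk:
        counts[min(int(value * NUM_BINS), NUM_BINS - 1)] += 1
    return counts

def parallel_histogram(data, workers=4, chunks=8):
    """Histogram `data` by farming out independent chunks to workers."""
    size = (len(data) + chunks - 1) // chunks
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Each piece is independent: no sharing, no coordination.
        partials = pool.map(histogram_chunk, pieces)
    # Final step: compile the partial counts into one histogram.
    return [sum(column) for column in zip(*partials)]

if __name__ == "__main__":
    random.seed(42)
    data = [random.random() for _ in range(1_000_000)]
    hist = parallel_histogram(data)
    assert sum(hist) == len(data)  # every value landed in some bin
    print(hist)
```

The same shape (partition, map in parallel, reduce at the end) scales from a dual-core desktop up to a cluster; only the machinery that distributes the pieces changes.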
Let’s make it a community discussion: Be sure to comment, question, or tell us when you think we have missed the mark. If you are currently using HPC in any way, we would love to hear from you as well.<br /><br />To get the ball rolling, you can contact me at <a title="mailto:mfghpc@gmail.com" href="mailto:mfghpc@gmail.com">mfghpc@gmail.com</a> or you can just respond to this posting.<br /><br /><strong>Nancy Glenn</strong> is a manufacturing solution design analyst and a contributor to <em>InTech</em> magazine.</p>