Geophysical Prospecting Meets Big Data: HPC Challenges Grow Bigger

Intel Corporation recently held its sixth high-performance computing seminar in Nanjing, under the theme "Chi-Ling Yunhai is derived from Chuangxin" (roughly, "riding the sea of clouds springs from innovation"). At the conference, high-performance computing experts from Intel exchanged views in depth with users from the energy industry. The ZDNet server channel is covering the seminar through feature articles and blog posts, from the perspectives of both the experts and the users.

Mr. Lai Nenghe is the chief engineer of the Research Institute of PetroChina's Oriental Geophysical Company. A ZDNet server channel reporter interviewed him after the seminar, beginning with the current state of domestic oil exploration.

Before turning to the current state of domestic oil exploration, it is worth explaining the role high-performance computing plays in petroleum exploration. Many people's picture of exploration is still the traditional drill-and-verify approach: field crews carry various kinds of special equipment into the field, run a series of geological surveys to predict whether the strata in an area contain oil, and then sink a few experimental wells in that area to verify the prediction.

That approach is long out of date, because oil is not distributed in one large "basin"; it typically sits in clusters of isolated pockets, like grapes on a vine. Faced with imprecise survey results, companies could only drill more wells and hope that one landed on top of an oil-bearing "grape", yet experimental drilling is extremely expensive. To raise the capacity and efficiency of exploration while cutting its cost, oil companies soon adopted more advanced geophysical methods, above all the seismic method.

The seismic method, in a nutshell, uses explosives to excite artificial seismic waves at the surface. These waves travel deep underground and produce different reflections when they encounter different geological formations. Geophones at the surface collect the reflected waves and convert them into electronic signals, which are stored as data. By computing on and processing these data, one can reconstruct a clear model of the subsurface geological structure of the survey area and pinpoint the rock layers that contain oil or natural gas.
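To make the principle concrete, here is a minimal, illustrative sketch of the "convolutional model" that underlies reflection seismology: boundaries between rock layers produce a reflectivity series, and the recorded trace is approximately that series convolved with the source wavelet. The impedance values and wavelet frequency below are invented for the example; this is a toy in plain NumPy, not the institute's processing code.

```python
# Toy 1-D convolutional model of a seismic reflection survey.
# Assumed/invented: the impedance profile, layer position, and 30 Hz wavelet.
import numpy as np

def ricker(freq_hz, dt, length_s=0.128):
    """Ricker wavelet, a common idealization of the seismic source pulse."""
    t = np.arange(-length_s / 2, length_s / 2, dt)
    a = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001                                 # 1 ms sample interval
impedance = np.full(1000, 3000.0)          # background acoustic impedance
impedance[400:520] = 4500.0                # a harder, possibly hydrocarbon-bearing layer

# Reflection coefficient at each interface: (I2 - I1) / (I2 + I1).
refl = np.diff(impedance) / (impedance[1:] + impedance[:-1])

# The recorded trace: reflectivity convolved with the source wavelet.
trace = np.convolve(refl, ricker(30.0, dt), mode="same")
print("strongest reflections near samples:", np.sort(np.argsort(np.abs(trace))[-2:]))
```

Real surveys record millions of such traces, and production processing effectively inverts this forward model at enormous scale, which is what drives the computing demands discussed below.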

So how far have PetroChina's high-performance computing applications in exploration developed?

Current high-performance computing software for seismic oil exploration can be divided, by the nature of its computation, into seismic data processing and reservoir simulation.

Using High Performance Computing to Simulate Reservoir Distribution

In terms of application characteristics, seismic data processing is a typical floating-point compute-intensive workload. Its main computational pattern is the data-intensive wave equation, so it demands high floating-point throughput but only moderate memory bandwidth, and processing large numbers of shot gathers requires good multi-core scalability. Reservoir simulation is different. Unlike seismic data processing software, whose algorithms are largely spectral, it requires the computing platform to support the iterative solution of sparse systems of equations; it needs very high memory bandwidth and large caches, and can be classified as a compute-intensive application that is highly sensitive to memory bandwidth.
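As a rough illustration of the reservoir-simulation pattern, the sketch below solves a sparse linear system iteratively with conjugate gradients, using a 2-D finite-difference Laplacian as a stand-in for a pressure equation. The grid size, well position, and coefficients are invented for the example, and SciPy is assumed; real simulators assemble far more complex multiphase-flow systems, but the bottleneck is the same: each iteration streams the sparse matrix through memory, which is exactly the bandwidth sensitivity described above.

```python
# Toy stand-in for the reservoir-simulation compute pattern:
# an iterative sparse solve dominated by sparse matrix-vector products.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 200                                    # 200 x 200 grid -> 40,000 unknowns
main = 4.0 * np.ones(n * n)
side = -1.0 * np.ones(n * n - 1)
side[np.arange(1, n * n) % n == 0] = 0.0   # no coupling across grid-row boundaries
updown = -1.0 * np.ones(n * n - n)
A = sp.diags([main, side, side, updown, updown],
             [0, 1, -1, n, -n], format="csr")   # 5-point Laplacian stencil

b = np.zeros(n * n)
b[(n // 2) * n + n // 2] = 1.0             # a single injection-well source term

# Conjugate gradients: each iteration is dominated by A @ x, i.e. streaming
# the sparse matrix through memory, the bandwidth-bound part of the workload.
x, info = cg(A, b)
print("converged" if info == 0 else f"cg stopped with info={info}",
      "| peak pressure:", float(x.max()))
```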

Mr. Lai Nenghe first described the basic situation of the research institute's computing center. The institute of the Oriental Geophysical Company has changed greatly over the past year, he said. Its data center now holds more than 23,000 processors totaling more than 75,000 cores, with a theoretical peak of 695 teraflops. The processing center alone has 936 GPUs, roughly 450,000 GPU cores, delivering 576 teraflops.

Mr. Lai Nenghe told the author that today's high-performance computing faces many problems, such as coordinating large numbers of processors and managing power in CPU+GPU systems, but the most important are the storage bottleneck and the data security problems encountered in big data applications.

He said that in the big data context, high-performance computing routinely works on datasets of dozens or even hundreds of terabytes, and saving and backing up such data has become very difficult. Lai believes the computer age has, in effect, entered the era of storage. In the past, storage appeared only as an auxiliary product, an accessory to the server: the server had its own internal disks, and external storage merely held data. Today, investment in storage equipment accounts for more than half of IT investment.

For the institute, the daily growth in data volume is measured in terabytes. How should such mass storage and massive data be managed? Two questions are key: whether performance can be improved, and how the data is administered. He said the institute is gradually adopting parallel storage systems, moving from traditional architectures such as DAS to NAS and self-built SAN systems, while steadily improving the performance and efficiency of the storage system through digital management and monitoring.

For this, Lai has adopted a self-built storage architecture based on the GPFS file system. As for the currently popular Hadoop architecture, Lai expressed interest and said related tests will be carried out in the future.

In addition, although the trend toward big data applications is clear, truly big data jobs are still not very common; a dataset of tens of terabytes may come up only two or three times a year, so most problems are still solved with distributed architectures. For example, the institute is building a dedicated high-density system for processing massive data. It will be fully equipped with the latest Xeon E5-2600 series processors, 128GB or 256GB of memory per node, a large-capacity storage system (around 5TB), and a 10-Gigabit network, to deliver strong performance and good stability.

Follow-up on questions from earlier interviews with Mr. Lai:

1. The institute used to run many single-socket servers. What is the situation now?

A: Mainly dual-socket machines, along with four-socket and eight-socket systems. In the past, the SMP architecture restricted how applications could allocate memory, so four-socket machines could run into memory shortages. For seismic interpretation, single-, dual-, four- and eight-socket servers are all in use. Chief engineer Lai Nenghe said the eight-socket servers are now fully operational.

2. In April, on stage at the HP Gen8 server launch, you said you were testing the SL250 Gen8. How has that gone?

A: We placed purchase orders not long afterwards. The SL250 Gen8 is HP's highly scalable server line; we now use these machines mainly for GPU-accelerated work, chiefly complex parallel computations such as reverse-time migration.
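For readers unfamiliar with reverse-time migration, its computational core is finite-difference time-stepping of the wave equation over the whole survey volume, repeated for every shot. The 2-D NumPy sketch below (grid size, velocity, and step counts are invented for the example) only illustrates that uniform, data-parallel stencil structure, which is what makes the workload map so well onto GPUs.

```python
# Toy 2-D acoustic wave propagation: the stencil pattern at the heart of
# reverse-time migration. Production codes run 3-D versions of this kernel
# on GPUs for thousands of time steps per shot.
import numpy as np

nx = nz = 300
dx, dt, v = 10.0, 0.001, 3000.0            # grid step (m), time step (s), velocity (m/s)
c = (v * dt / dx) ** 2                     # stability: v*dt/dx = 0.3 < 1/sqrt(2)

p_prev = np.zeros((nz, nx))                # pressure field at t - dt
p_curr = np.zeros((nz, nx))                # pressure field at t
p_curr[nz // 2, nx // 2] = 1.0             # impulsive source at the grid center

for _ in range(500):
    lap = (p_curr[:-2, 1:-1] + p_curr[2:, 1:-1] +
           p_curr[1:-1, :-2] + p_curr[1:-1, 2:] -
           4.0 * p_curr[1:-1, 1:-1])       # 5-point Laplacian
    p_next = 2.0 * p_curr - p_prev         # second-order leapfrog in time
    p_next[1:-1, 1:-1] += c * lap
    p_prev, p_curr = p_curr, p_next

print("wavefield energy after 500 steps:", float((p_curr ** 2).sum()))
```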

3. You said you will invest about 50 million in this big data effort. What kind of system will it mainly be?

A: We will mainly purchase dual-socket servers to build a high-density mass-computing system, with roughly 128GB of memory and 5TB of storage per node. For the network we are considering InfiniBand or 10GbE products.
