
CPU architecture



Basic concepts

The central processing unit (CPU, Central Processing Unit) is a very-large-scale integrated circuit that serves as the computing and control core of a computer. Its main function is to interpret computer instructions and to process the data used by computer software. The CPU mainly comprises the arithmetic logic unit (ALU, Arithmetic Logic Unit), the cache (high-speed buffer memory), and the buses (Bus) that carry the data (Data), control, and status signals connecting them. Together with main memory (Memory) and input/output (I/O) devices, it is counted among the three core components of an electronic computer. Common CPU structures include the von Neumann structure, the Harvard structure, and overlapped, pipelined, and parallel processing structures.

The basic structure of the CPU

From a functional point of view, the internal structure of a typical CPU can be divided into three parts: the control unit, the logic operation unit, and the storage unit (including the internal bus and buffers). The control unit orchestrates the entire data-processing flow, the logic unit executes each instruction in turn to produce the result the program requires, and the storage unit holds the original data and the results of operations. Their coordinated operation gives the CPU its power, allowing it to carry out many complex operations, including floating-point and multimedia instructions, and it is this capability that drives the digital age.
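To make the cooperation of these three units concrete, here is a minimal C sketch of the fetch-decode-execute cycle. The 2-byte instruction encoding, the opcode names (OP_LOAD, OP_ADD, OP_STORE, OP_HALT), and the accumulator and temporary registers are invented for illustration; a real CPU performs this cycle in hardware rather than in a software loop.

```c
/* Minimal sketch of the fetch-decode-execute cycle: the loop and switch
 * stand in for the control unit, the ADD case for the ALU, and the byte
 * array for the storage unit. The encoding is hypothetical. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint8_t memory[16] = {            /* storage unit: program + data */
        OP_LOAD, 14,                  /* tmp <- acc, acc <- memory[14] */
        OP_LOAD, 15,                  /* tmp <- acc, acc <- memory[15] */
        OP_ADD,  0,                   /* acc <- acc + tmp (ALU step)   */
        OP_STORE, 13,                 /* memory[13] <- acc             */
        OP_HALT, 0,
        0, 0, 0,                      /* unused                        */
        0, 7, 5                       /* memory[13..15]: result, 7, 5  */
    };
    uint8_t acc = 0, tmp = 0;         /* registers */
    unsigned pc = 0;                  /* program counter (control unit) */

    for (;;) {
        uint8_t opcode  = memory[pc];     /* fetch */
        uint8_t operand = memory[pc + 1];
        pc += 2;
        switch (opcode) {                 /* decode + execute */
        case OP_LOAD:  tmp = acc; acc = memory[operand]; break;
        case OP_ADD:   acc = (uint8_t)(acc + tmp);       break;
        case OP_STORE: memory[operand] = acc;            break;
        case OP_HALT:  printf("result = %u\n", memory[13]); return 0;
        default:       return 1;
        }
    }
}
```

Running it prints result = 12: the control logic stepped through the stored program, the ALU case performed the arithmetic, and the memory array held both the program and its data.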

Logic components

Also known as arithmetic logic components. They perform fixed-point or floating-point arithmetic operations, shift operations, and logic operations, as well as address computation and conversion.

Register

The register components include general-purpose registers, special-purpose registers, and control registers. General-purpose registers can be divided into fixed-point and floating-point registers; they temporarily hold the operands and the intermediate (or final) results produced during instruction execution. General-purpose registers are one of the important parts of the central processing unit.

Control component

The control component is mainly responsible for decoding instructions and issuing the control signals for each operation needed to complete them. Two structures are common: one is the micro-program control scheme built around a micro-store; the other is a control scheme based on hard-wired logic.

The micro-store holds micro-codes, each of which corresponds to one of the most basic micro-operations, also called micro-instructions. Each machine instruction is realized as a particular sequence of micro-codes, and this sequence constitutes a microprogram. After the central processing unit decodes an instruction, it issues timing control signals and, using the micro-cycle as its beat, executes the micro-operations specified by these micro-codes in the given order to complete that instruction. Simple instructions consist of 3 to 5 micro-operations, while complex instructions may require dozens or even hundreds.
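The following C sketch illustrates the micro-programmed approach under some assumptions: the micro-operation names (UOP_FETCH_OPERAND and so on) and the micro-programs for an ADD and a LOAD instruction are invented for illustration, and real micro-code drives hardware control lines rather than printing text.

```c
/* Hypothetical micro-programmed control: each machine instruction is
 * expanded into a fixed sequence of micro-operations read from a micro-store. */
#include <stdio.h>

typedef enum {                 /* basic micro-operations (hypothetical) */
    UOP_FETCH_OPERAND,
    UOP_READ_REG,
    UOP_ALU_ADD,
    UOP_WRITE_REG,
    UOP_END
} MicroOp;

/* Micro-store: one micro-program per machine instruction. */
static const MicroOp UPROG_ADD[]  = { UOP_FETCH_OPERAND, UOP_READ_REG,
                                      UOP_ALU_ADD, UOP_WRITE_REG, UOP_END };
static const MicroOp UPROG_LOAD[] = { UOP_FETCH_OPERAND, UOP_WRITE_REG, UOP_END };

static void run_microprogram(const MicroOp *uprog) {
    /* The control unit steps through micro-ops, one per micro-cycle,
     * emitting the control signals each micro-op stands for. */
    for (int cycle = 0; uprog[cycle] != UOP_END; ++cycle)
        printf("  micro-cycle %d: micro-op %d\n", cycle, uprog[cycle]);
}

int main(void) {
    printf("executing ADD:\n");  run_microprogram(UPROG_ADD);
    printf("executing LOAD:\n"); run_microprogram(UPROG_LOAD);
    return 0;
}
```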

The logical unit of the CPU

Looking in a little more detail at the functions it implements, the CPU can be roughly divided into the following eight logical units:

(1) Instruction register: It is the on-chip store of instructions. With it, the CPU does not have to stop and look up instructions in the computer's memory, which greatly increases the CPU's computing speed.

(2) Instruction decoder: It translates complex machine-language instructions into a simple format that the arithmetic logic unit (ALU) and the registers can understand, much like an interpreter (a small decoding sketch follows this list).

(3) Control unit: Since instructions can be stored in the CPU, and there are instructions that complete the preparatory work before a calculation, there is naturally a component behind the scenes that drives the whole process: the control unit, which directs the entire course of processing. Based on the instruction delivered by the decoding unit, it generates control signals that tell the arithmetic logic unit (ALU) and the registers how to operate, what to operate on, and how to handle the result.

(4) Register: Registers are very important to the CPU. Besides holding part of the program's instructions, they also store pointer and jump information and loop-control commands. A register is a small storage area holding the data the arithmetic logic unit (ALU) needs to complete the task requested by the control unit; the data can come from the cache, from memory, or from the control unit.

(5) Logic operation unit (ALU): It is the intelligent core of the CPU chip, able to execute arithmetic commands such as addition, subtraction, multiplication, and division, and it also understands logical commands such as OR, AND, and NOT. Messages from the control unit tell the ALU what to do, and the unit then fetches data from the registers, intermittently or continuously, to complete the task (see the sketch after this list).

(6) Prefetch unit: CPU performance depends heavily on it. The prefetch hit rate directly affects how well the CPU core is utilized, which in turn determines how fast instructions are executed. Depending on the command or task to be executed, the prefetch unit may at any time obtain data and instructions from the instruction cache or from the computer's memory. When instructions arrive, its most important job is to make sure they are all arranged in the correct order before sending them to the decoding unit.

(7) Bus unit: Like a highway, it rapidly moves data between the various units, and it is also the path along which data flows between memory and the CPU.

(8) Data cache: It stores specially marked data from the decoding unit for use by the logic operation unit, and it also holds final results ready to be distributed to different parts of the computer.
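To make items (2) and (5) above more concrete, here is a small C sketch of a decoder feeding an ALU. The 8-bit instruction format (a 3-bit opcode plus two 2-bit register fields) and the operation set are assumptions made for illustration only, not any real instruction set.

```c
/* Hypothetical decoder + ALU: 8-bit instruction = 3-bit opcode,
 * 2-bit destination register, 2-bit source register, 1 unused bit. */
#include <stdint.h>
#include <stdio.h>

enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_NOT };

static uint8_t alu(int op, uint8_t a, uint8_t b) {   /* logic operation unit */
    switch (op) {
    case ALU_ADD: return (uint8_t)(a + b);
    case ALU_SUB: return (uint8_t)(a - b);
    case ALU_AND: return a & b;
    case ALU_OR:  return a | b;
    case ALU_NOT: return (uint8_t)~a;
    default:      return 0;
    }
}

int main(void) {
    uint8_t regs[4] = { 9, 3, 0, 0 };      /* register file */
    uint8_t instr = 0x02;                  /* opcode=000 (ADD), dst=r0, src=r1 */

    int opcode = (instr >> 5) & 0x7;       /* instruction decoder:     */
    int dst    = (instr >> 3) & 0x3;       /* slice the bit fields out */
    int src    = (instr >> 1) & 0x3;       /* of the raw instruction   */

    regs[dst] = alu(opcode, regs[dst], regs[src]);   /* control unit drives ALU */
    printf("r%d = %u\n", dst, regs[dst]);            /* prints r0 = 12 */
    return 0;
}
```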

From the above it can be seen that, small as the CPU is, it holds a whole world within a square inch. Its interior works like a sophisticated assembly plant, with every stage interlocking with the next. It is precisely this cooperation that allows instructions to be executed and that creates the digital world of pictures, text, and images.

The architecture of the CPU

The following introduces the von Neumann structure, the Harvard structure, and the parallel processing structure of the CPU.

Von Neumann architecture

The von Neumann architecture, also known as the Princeton architecture, is a conceptual design for computers in which program instructions and data share the same memory. Figure 1 is a structural diagram of the von Neumann architecture:

This structure implicitly introduces the idea of separating the storage device from the central processing unit, so a computer designed along these lines is also called a stored-program computer. The earliest computing machines contained only fixed-purpose programs, and some modern devices still keep this design, usually for simplicity or for educational purposes. A calculator, for example, has only a fixed mathematical program; it cannot be used as word-processing software, let alone for playing games. To change such a machine's program, one must rewire it, restructure it, or even redesign it. Of course, the earliest computers were simply not designed to be programmable: at that time, "rewriting the program" meant designing the program with paper and pencil, working out the engineering details, and then physically changing the machine's wiring or structure.

The concept of the stored-program computer changed all of this. By defining an instruction set architecture and expressing computation as a series of program instructions, the machine becomes far more flexible. Because instructions are treated as a special kind of static data, a stored-program computer can easily change its program, and thus what it computes, under program control. The terms von Neumann architecture and stored-program computer are often used interchangeably, and that usage is followed here. The Harvard architecture is a design that separates program storage from ordinary data storage, but it does not fundamentally depart from the von Neumann structure.

The stored-program concept also allows a program to modify its own instructions while it is running. One motivation for this was to let a program adjust the memory addresses of its instructions itself, something early designs required the user to do by hand. But as index registers and indirect addressing became standard hardware mechanisms, this capability lost much of its importance. Self-modifying code has also been abandoned by modern programming practice because it makes programs hard to understand and debug, and because the pipelines and caches of modern central processing units make it inefficient.

Harvard architecture

The Harvard architecture is a memory organization that separates program instruction storage from data storage. The central processing unit first reads an instruction from the program instruction memory, decodes it to obtain the data address, reads the data from the corresponding data memory, and then performs the operation (usually execution). Because instruction storage and data storage are separate, instruction and data accesses can take place at the same time, and instructions and data may have different widths. For example, the program instructions of Microchip's PIC16 chips are 14 bits wide, while the data is 8 bits wide.
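A rough C sketch of this separation, loosely inspired by the PIC16 example above: instructions live in their own memory of 14-bit words (stored here in 16-bit integers) while data lives in a separate 8-bit memory, so the two stores can have different widths and be accessed independently. The opcode layout is invented for illustration.

```c
/* Harvard-style separation: instructions and data in different memories
 * with different widths (14-bit instruction words vs. 8-bit data bytes). */
#include <stdint.h>
#include <stdio.h>

#define OP_HALT 0x0   /* hypothetical 2-bit opcodes in bits 13-12 */
#define OP_LOAD 0x1
#define OP_ADD  0x2

static const uint16_t prog_mem[] = {   /* program instruction memory (14-bit words) */
    (OP_LOAD << 12) | 0x10,            /* load data_mem[0x10] into the accumulator */
    (OP_ADD  << 12) | 0x11,            /* add  data_mem[0x11] to the accumulator   */
    (OP_HALT << 12)
};

static uint8_t data_mem[256] = { [0x10] = 40, [0x11] = 2 };   /* data memory (8-bit) */

int main(void) {
    uint8_t acc = 0;
    for (unsigned pc = 0; ; ++pc) {
        uint16_t instr = prog_mem[pc] & 0x3FFF;   /* fetch from instruction memory */
        uint16_t op    = instr >> 12;
        uint16_t addr  = instr & 0x0FFF;
        if (op == OP_HALT) break;
        if (op == OP_LOAD) acc = data_mem[addr];              /* access data memory */
        if (op == OP_ADD)  acc = (uint8_t)(acc + data_mem[addr]);
    }
    printf("acc = %u\n", acc);   /* prints acc = 42 */
    return 0;
}
```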

Microprocessors with a Harvard architecture usually achieve higher execution efficiency: because program instructions and data are organized and stored separately, the next instruction can be fetched in advance while the current one executes. The structure diagram is shown in Figure 2:

Parallel processing structure

Although the first four generations of computers differ greatly in hardware and performance, they are all derived from one basic design: the von Neumann processor. These machines are sequential; that is, a single processing unit completes one operation at a time.

Their control components are centralized and sequential, their memory is linearly addressed and of fixed width, and they use low-level sequential machine languages. To make a sequential machine faster, the operating speed of each component must be increased, yet the speed required of fifth-generation computers is far beyond what this approach can deliver. Fifth-generation computers will therefore be parallel machines, with architectures that allow the computer to perform many operations at once. Parallel machines have been under development for many years, and the array processor has proven its commercial viability. This kind of machine succeeds mainly because it integrates well with the traditional von Neumann programming model of a sequential instruction stream.

Michael Flynn of Stanford University characterized such machines as having a Single Instruction stream, Multiple Data stream (SIMD) structure. SIMD machines are best suited to problems involving regular, dense arrays, such as image processing, matrix operations, and physical simulation. They are not as general-purpose as a single processor and usually serve as an attached processor alongside a von Neumann host.
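As a small illustration of the SIMD idea, the sketch below adds two float arrays first with an ordinary scalar loop and then with x86 SSE intrinsics, where a single instruction (_mm_add_ps) operates on four data elements at once. It assumes an x86 processor with SSE and a compiler that provides <immintrin.h>.

```c
/* One instruction, multiple data: each _mm_add_ps adds four floats at once. */
#include <immintrin.h>
#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {10, 20, 30, 40, 50, 60, 70, 80};
    float scalar[N], simd[N];

    /* Scalar (SISD) version: one addition per instruction. */
    for (int i = 0; i < N; ++i)
        scalar[i] = a[i] + b[i];

    /* SIMD version: one instruction processes a 4-element lane. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&simd[i], _mm_add_ps(va, vb));
    }

    for (int i = 0; i < N; ++i)
        printf("%g %g\n", scalar[i], simd[i]);
    return 0;
}
```

Both loops produce the same sums; the difference is that the SIMD loop issues one add instruction per four elements, which is exactly the single-instruction-stream, multiple-data-stream pattern Flynn describes.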
