RISC AND CISC ARCHITECTURE PDF


Author: Vudal Tall
Country: Panama
Language: English (Spanish)
Genre: Software
Published (Last): 27 December 2009
Pages: 15
PDF File Size: 15.34 Mb
ePub File Size: 11.68 Mb
ISBN: 900-7-27753-937-7
Downloads: 62302
Price: Free* [*Free Registration Required]
Uploader: Moogut



A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor.

It therefore allows for more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core, if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit.

While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases.
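
To make the contrast concrete, here is a minimal sketch (in Python, with made-up instruction counts, pipeline depth and issue width) of idealized cycle counts for a stream of fully independent instructions. Pipelining overlaps the phases of successive instructions in one execution unit and tops out at roughly one completed instruction per cycle, whereas a superscalar design can complete several per cycle:

    # Idealized cycle counts for n independent instructions (hypothetical numbers;
    # ignores stalls, branches and cache misses).
    def pipelined_cycles(n, stages):
        # One instruction completes per cycle once the pipeline is full.
        return stages + (n - 1)

    def superscalar_cycles(n, width):
        # Up to 'width' instructions are dispatched and completed per cycle.
        return -(-n // width)  # ceiling division

    n = 100
    print(pipelined_cycles(n, stages=5))   # 104 cycles, throughput near 1 per cycle
    print(superscalar_cycles(n, width=4))  # 25 cycles, throughput near 4 per cycle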

The superscalar technique is traditionally associated with several identifying characteristics within a given CPU: instructions are issued from a sequential instruction stream; the CPU dynamically checks for data dependencies between instructions at run time (rather than relying solely on the compiler to check at compile time); and the CPU can execute multiple instructions per clock cycle. Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. Motorola's 88000-family processors, the Intel i960CA and the AMD 29000-series microprocessors were among the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be used to include multiple execution units (this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s).

Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar. The P5 Pentium was the first superscalar x86 processor. The Nx586, P6 Pentium Pro and AMD K5 were among the first designs which decode x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture. This opened up dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86.

The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two: each instruction processes one data item, but there are multiple execution units within each CPU, so multiple instructions can be processing separate data items concurrently.
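
As a small illustration of the scalar/vector distinction (a sketch only, using plain Python values and lists to stand in for registers), a scalar add consumes one pair of operands per instruction, while a vector add consumes many pairs in a single instruction:

    # Scalar view: one add instruction handles one pair of operands.
    def scalar_add(a, b):
        return a + b

    # Vector view: one add instruction handles many pairs at once
    # (simulated here with a list comprehension over two equal-length lists).
    def vector_add(va, vb):
        return [x + y for x, y in zip(va, vb)]

    print(scalar_add(3, 4))                   # one data item per instruction
    print(vector_add([1, 2, 3], [4, 5, 6]))   # many data items per instruction

A superscalar machine keeps the scalar view of each instruction but, given enough independent scalar operations, executes several of them at once on separate units.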

Superscalar CPU design emphasizes improving the accuracy of the instruction dispatcher and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased.

If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design. A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods.

In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU.

Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
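
A minimal sketch of this kind of dispatch decision is shown below (Python, with a hypothetical instruction format of a destination register plus a set of source registers; real dispatchers also track structural hazards, in-flight instructions and anti-dependencies). It pairs two adjacent instructions only when the second neither reads nor overwrites the result of the first:

    # Toy in-order dual-issue dispatcher (hypothetical instruction encoding:
    # each instruction is (destination register, set of source registers)).
    def can_pair(first, second):
        dest1, _ = first
        dest2, srcs2 = second
        raw = dest1 in srcs2   # read-after-write hazard
        waw = dest1 == dest2   # write-after-write hazard
        return not (raw or waw)

    def issue(stream):
        i, cycles = 0, 0
        while i < len(stream):
            if i + 1 < len(stream) and can_pair(stream[i], stream[i + 1]):
                i += 2          # two independent instructions issue together
            else:
                i += 1          # a dependency forces single issue
            cycles += 1
        return cycles

    program = [("r1", {"r2", "r3"}),   # r1 = r2 + r3
               ("r4", {"r1", "r5"}),   # r4 = r1 + r5, depends on r1
               ("r6", {"r7", "r8"})]   # independent of both
    print(issue(program))              # 2 cycles: (instr 0), then (instr 1, instr 2)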

Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. When the number of simultaneously issued instructions increases, the cost of dependency checking increases extremely rapidly.

This is exacerbated by the need to check dependencies at run time and at the CPU's clock rate. This cost includes additional logic gates required to implement the checks, and time delays through those gates. Even though the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results.
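
A back-of-the-envelope way to see how quickly this cost climbs (a simplified count that ignores comparisons against instructions already in flight) is to count the checks needed within one issue group: every instruction must be compared against every earlier instruction in the same group, so the number of pairs grows roughly quadratically with the issue width n, as n(n-1)/2:

    # Pairwise dependency comparisons inside a single issue group of width n.
    for n in (2, 4, 8, 16):
        print(n, n * (n - 1) // 2)   # 2->1, 4->6, 8->28, 16->120 comparisons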

No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively the power consumption, complexity and gate delay costs limit the achievable superscalar speedup to roughly eight simultaneously dispatched instructions. However, even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup.
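
One way to picture this limit imposed by the instruction stream itself (a sketch with a made-up dependency graph) is that the best possible speedup is bounded by the total number of instructions divided by the length of the longest dependency chain, since the instructions on that chain must execute one after another no matter how wide the machine is:

    # Upper bound on speedup imposed by the code itself.
    # deps[i] lists the earlier instructions that instruction i depends on.
    def critical_path(deps):
        depth = []
        for preds in deps:
            depth.append(1 + max((depth[p] for p in preds), default=0))
        return max(depth)

    deps = [[], [0], [1], [], []]            # a chain of 3 plus 2 independent ops
    print(len(deps) / critical_path(deps))   # speedup capped at 5/3 here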

Thus the degree of intrinsic parallelism in the code stream forms a second limitation. Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing.

With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.
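
A minimal sketch of the VLIW idea follows (Python, with the same hypothetical instruction format of a destination register plus source registers; a real compiler would also schedule across basic blocks and respect functional-unit types). A compiler-side pass packs adjacent independent instructions into fixed-width bundles ahead of time, so the hardware can issue each bundle without any run-time dependency check:

    # Toy VLIW-style bundler run at "compile time".
    # Instructions are (destination register, set of source registers).
    def independent(a, b):
        return a[0] not in b[1] and b[0] not in a[1] and a[0] != b[0]

    def bundle(stream, width):
        bundles, current = [], []
        for instr in stream:
            if len(current) < width and all(independent(p, instr) for p in current):
                current.append(instr)
            else:
                bundles.append(current)
                current = [instr]
        if current:
            bundles.append(current)
        return bundles   # each bundle is issued as one wide instruction

    program = [("r1", {"r2"}), ("r3", {"r4"}), ("r5", {"r1"}), ("r6", {"r7"})]
    for b in bundle(program, width=2):
        print(b)   # [("r1",...), ("r3",...)] then [("r5",...), ("r6",...)]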

Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable execution of many instructions in parallel.

This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit (called a "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion. The various alternative techniques are not mutually exclusive; they can be and frequently are combined in a single processor.

Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
