
In a multiple-bus organization, the CPU utilizes separate data paths for instructions, operands, and results, improving efficiency compared to a single-bus system. Here’s how a complete instruction is executed in this architecture:

Instruction Fetch Cycle:

  1. PC to MAR: The address of the next instruction, held in the Program Counter (PC), is transferred to the Memory Address Register (MAR).
  2. Instruction Fetch: The control unit initiates a memory read operation, sending the address from the MAR to the memory unit.
  3. Memory Access: The memory unit retrieves the instruction at the specified address and places it in the Memory Data Register (MDR).
  4. MDR to IR: The fetched instruction is transferred from the MDR into the Instruction Register (IR).
  5. PC Update: The PC is incremented to point to the address of the next instruction in sequence (a taken branch later overwrites this value during execution).
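
To make these register transfers concrete, here is a minimal Python sketch of the fetch cycle. It assumes a hypothetical word-addressed memory and a dictionary of registers (PC, MAR, MDR, IR); the instruction encoding is illustrative only, not tied to any real machine.

```python
def fetch(cpu, memory):
    """Fetch one instruction and advance the PC (steps 1-5 above)."""
    cpu["MAR"] = cpu["PC"]           # 1. PC -> MAR
    cpu["MDR"] = memory[cpu["MAR"]]  # 2-3. memory read at the address in MAR
    cpu["IR"] = cpu["MDR"]           # 4. MDR -> IR
    cpu["PC"] += 1                   # 5. point PC at the next sequential instruction
    return cpu["IR"]

# Usage: a tiny memory holding two encoded instructions.
memory = {0: ("LOAD", "R1", 10), 1: ("ADD", "R1", "R2", "R3")}
cpu = {"PC": 0, "MAR": 0, "MDR": None, "IR": None}
print(fetch(cpu, memory))  # ('LOAD', 'R1', 10)
print(cpu["PC"])           # 1
```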

Instruction Decode and Execution Cycle:

  1. Instruction Decode: The control unit decodes the instruction in the IR to determine the operation to be performed (e.g., add, subtract, load, store) and identify any operands required.
  2. Operand Fetch (if necessary): For instructions requiring operands, separate memory read operations are initiated using the operand addresses found within the instruction itself or retrieved from registers. These operands are transferred to specific CPU registers.
  3. ALU Operation (if applicable): If the instruction involves arithmetic or logical operations, the operands in the designated registers are sent to the Arithmetic Logic Unit (ALU) for processing. The result is stored in a designated register.
  4. Result Store (if applicable): For store instructions, the result from the ALU or another register is written back to memory at the specified address using a separate memory write operation.
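
Continuing the same illustrative model, the sketch below decodes a fetched instruction tuple and handles three representative cases: an operand fetch (LOAD), an ALU operation (ADD), and a result store (STORE). The opcodes and register names are assumptions made for this example, not a specific instruction set.

```python
def decode_and_execute(cpu, memory, instr):
    """Decode a fetched instruction tuple and carry out one of three cases."""
    op = instr[0]                           # 1. decode the operation
    if op == "LOAD":                        # 2. operand fetch from memory
        _, dst, addr = instr
        cpu["MAR"] = addr
        cpu["MDR"] = memory[cpu["MAR"]]
        cpu[dst] = cpu["MDR"]
    elif op == "ADD":                       # 3. ALU operation on register operands
        _, src_a, src_b, dst = instr
        cpu[dst] = cpu[src_a] + cpu[src_b]  # result lands in a designated register
    elif op == "STORE":                     # 4. result written back to memory
        _, src, addr = instr
        cpu["MAR"] = addr
        cpu["MDR"] = cpu[src]
        memory[cpu["MAR"]] = cpu["MDR"]

# Usage: load a value, add it to R2, and store the sum.
memory = {10: 7}
cpu = {"MAR": 0, "MDR": None, "R1": 0, "R2": 5, "R3": 0}
decode_and_execute(cpu, memory, ("LOAD", "R1", 10))         # R1 <- mem[10] = 7
decode_and_execute(cpu, memory, ("ADD", "R1", "R2", "R3"))  # R3 <- R1 + R2 = 12
decode_and_execute(cpu, memory, ("STORE", "R3", 11))        # mem[11] <- 12
print(memory[11])  # 12
```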

Benefits of Multiple-Bus Organization:

  • Increased Throughput: By having dedicated data paths, the CPU can fetch instructions, access operands, and store results simultaneously, which speeds up overall processing.
  • Reduced Bottlenecks: Unlike a single bus where all data transfers share the same path, multiple buses prevent congestion and conflicts, leading to smoother execution.

Drawback:

  • Increased Complexity: Designing and manufacturing a CPU with multiple data paths requires more complex hardware compared to a single-bus system.

In essence, a multiple-bus organization allows the CPU to fetch instructions, fetch operands, and store results concurrently, streamlining the instruction execution cycle and enhancing overall processor performance.