Understanding Bus Mastering DMA: How Does It Work?


Bus mastering is a bus architecture feature that allows a device attached to the bus to communicate directly with memory and other components without going through the CPU. It increases the system's data transfer rate, conserves CPU time, and improves performance and response time. Bus mastering is also referred to as first-party DMA, where the peripheral device itself transfers data to and from memory, as opposed to third-party DMA, where a system DMA controller performs the transfer on the device's behalf.

DMA controllers reside on the device card

Bus mastering allows a device to become the "master of the bus", taking control of the system bus to run transfers itself. The device typically asserts a Bus Request (BR) signal and waits for the bus arbiter to assert Bus Grant (BG) before it drives the bus and accesses memory.

In a bus mastering system, the CPU and peripherals can each be granted control of the memory bus. When a peripheral becomes a bus master, it can directly write to system memory without involving the CPU, providing memory address and control signals as required.
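
To make the request/grant handshake concrete, here is a minimal C sketch of the device-side logic. The register offsets (BUS_REQ, BUS_GNT) and the mmio_read32/mmio_write32 helpers are assumptions for illustration, not taken from any particular bus or chipset.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical memory-mapped arbitration registers of a peripheral. */
    #define BUS_REQ  0x00u   /* write 1 to assert Bus Request (BR)        */
    #define BUS_GNT  0x04u   /* reads 1 while Bus Grant (BG) is asserted  */

    extern void     mmio_write32(uint32_t reg, uint32_t val);  /* assumed helpers */
    extern uint32_t mmio_read32(uint32_t reg);

    /* Device-side view of a single bus-master write into system memory. */
    bool do_bus_master_write(uint32_t mem_addr, const uint32_t *data, int words)
    {
        mmio_write32(BUS_REQ, 1);                 /* 1. assert BR                     */

        while (mmio_read32(BUS_GNT) == 0)         /* 2. wait for the arbiter's BG     */
            ;                                     /*    (real hardware would time out) */

        for (int i = 0; i < words; i++) {
            /* 3. as bus master, the device drives the address and control
             *    lines itself and writes directly into system memory.     */
            *(volatile uint32_t *)(uintptr_t)(mem_addr + 4u * i) = data[i];
        }

        mmio_write32(BUS_REQ, 0);                 /* 4. release the bus               */
        return true;
    }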

This method of direct memory access is used to improve performance, as it allows the CPU to perform other operations while the transfer is in progress. It is particularly useful when the CPU cannot keep up with the rate of data transfer or when it needs to perform work while waiting for a relatively slow I/O data transfer.

Examples of devices that use bus mastering include hard disk controllers and network interface cards (NICs).

shunauto

DMA controllers can be on the motherboard

In older designs such as ISA, a DMA controller on the motherboard coordinated transfers on behalf of peripherals (third-party DMA). In contrast, modern IDE/ATA hard disks use first-party DMA transfers, also known as bus mastering: the peripheral device itself acts as the DMA controller and transfers data directly to and from memory without relying on an external DMA controller or the CPU. This approach allows for more efficient data transfer and keeps CPU utilisation low.
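
As a concrete illustration, the sketch below sets up a first-party transfer using the standard Bus Master IDE register block (SFF-8038i): the driver builds a small physical region descriptor (PRD) table, points the controller at it, and sets the start bit, after which the disk controller masters the bus and moves the data itself. The outb/outl helpers and the bmide_base parameter are assumptions; a real driver would also issue the matching READ DMA command to the drive and handle the completion interrupt.

    #include <stdint.h>

    /* Physical Region Descriptor (PRD) entry as defined by the Bus Master
     * IDE specification (SFF-8038i): physical address, byte count, and an
     * end-of-table flag in the top bit of the last word.                   */
    struct prd_entry {
        uint32_t phys_addr;   /* physical address of the buffer             */
        uint16_t byte_count;  /* bytes to transfer (0 means 64 KiB)         */
        uint16_t flags;       /* bit 15 set on the last entry in the table  */
    } __attribute__((packed));

    #define PRD_END_OF_TABLE 0x8000u

    /* Bus Master IDE registers, relative to the controller's I/O base.     */
    #define BMIDE_CMD    0x0   /* bit 0 = start, bit 3 = direction          */
    #define BMIDE_STATUS 0x2   /* bit 0 = active, bit 2 = interrupt         */
    #define BMIDE_PRDT   0x4   /* 32-bit physical address of the PRD table  */

    extern void outb(uint16_t port, uint8_t val);   /* assumed port-I/O helpers */
    extern void outl(uint16_t port, uint32_t val);

    /* Start a bus-master read from the disk into one physical buffer.      */
    void start_busmaster_read(uint16_t bmide_base,
                              struct prd_entry *prdt, uint32_t prdt_phys,
                              uint32_t buf_phys, uint16_t len)
    {
        prdt[0].phys_addr  = buf_phys;
        prdt[0].byte_count = len;
        prdt[0].flags      = PRD_END_OF_TABLE;

        outl(bmide_base + BMIDE_PRDT, prdt_phys);   /* point controller at the PRDT */
        outb(bmide_base + BMIDE_CMD, 0x08);         /* direction: device -> memory  */
        outb(bmide_base + BMIDE_CMD, 0x08 | 0x01);  /* set start bit: the transfer
                                                       now runs while the CPU does
                                                       other work                   */
    }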

DMA modes and inefficiencies

DMA modes refer to the methods computer systems use to transfer data between devices and memory without involving the CPU. Here are some common DMA modes:

Block Mode

Also known as burst mode, this mode allows the transfer of multiple data blocks in a single DMA operation. It reduces overhead by transferring consecutive data blocks without releasing control of the system bus after each block. It is the fastest DMA transfer mode, but the CPU is held off the bus for the duration of the transfer.

Demand Mode

In this mode, the DMA controller transfers data only while the peripheral device asserts its request (DREQ) line. When the device cannot supply or accept more data, it drops the request and the controller releases the bus, reducing unnecessary transfers and system bus contention.

Cycle Stealing Mode

This mode allows the DMA controller to temporarily take control of the system bus from the CPU for individual data transfer cycles. It steals CPU cycles to perform transfers, so the CPU and DMA controller access the system bus alternately. Cycle stealing is slower than burst mode, but less disruptive, because the CPU is not blocked for the entire transfer.
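
The difference between burst and cycle stealing is easiest to see on a toy timeline. The simulation below is purely illustrative: it models the bus as a sequence of cycles granted either to the CPU or to the DMA controller.

    #include <stdio.h>

    /* Toy model: simulate who owns the bus on each cycle for a 16-word
     * DMA transfer. In burst (block) mode the DMA controller keeps the bus
     * until it finishes; in cycle-stealing mode it takes one cycle, then
     * hands the bus back to the CPU for one cycle, and so on.              */
    static void simulate(int words, int burst)
    {
        int transferred = 0, cycle = 0;

        while (transferred < words) {
            if (burst) {
                printf("cycle %2d: DMA\n", cycle++);       /* CPU is held off  */
                transferred++;
            } else {
                printf("cycle %2d: DMA\n", cycle++);       /* steal one cycle  */
                transferred++;
                if (transferred < words)
                    printf("cycle %2d: CPU\n", cycle++);   /* CPU runs again   */
            }
        }
    }

    int main(void)
    {
        puts("burst mode:");          simulate(16, 1);
        puts("cycle-stealing mode:"); simulate(16, 0);
        return 0;   /* cycle stealing takes roughly twice as many cycles, but
                     * the CPU is never locked out for the whole transfer.    */
    }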

Fly-by Mode

In fly-by mode, data moves between the device and memory in a single bus cycle: the DMA controller supplies the address and control signals, but the data does not pass through the controller itself, so no intermediate buffering is needed and no CPU intervention is required.

Bus Mastering

Bus mastering is a feature of many bus architectures that enables a device connected to the bus to initiate direct memory access (DMA) transactions. It is also known as first-party DMA: the peripheral device itself does the work of transferring data to and from memory, becoming the "master of the bus". This allows for efficient data transfer between a device, such as a hard disk, and system memory while keeping CPU utilisation low.

Third-Party DMA

Third-party DMA, also known as standard or conventional DMA, uses a DMA controller on the motherboard to coordinate DMA transfers. During a transfer it is this controller, not the peripheral being served, that takes over the bus: it generates the addresses and communicates directly with memory and the device without involving the CPU.
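
On the PC, the classic example of such a motherboard controller is the Intel 8237 used by ISA devices such as the floppy controller. The sketch below programs DMA channel 2 for a transfer from the device into memory; the outb helper is an assumption, and the buffer is assumed to sit in memory the 8237 can reach (below 16 MB and not crossing a 64 KB boundary).

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t val);   /* assumed port-I/O helper */

    /* Program the legacy 8237 DMA controller (third-party DMA) so that ISA
     * channel 2 - traditionally the floppy controller - transfers 'len'
     * bytes from the device into the physical buffer at 'phys'.            */
    void isa_dma_setup_read(uint32_t phys, uint16_t len)
    {
        uint16_t count = len - 1;                /* the 8237 counts n-1 bytes  */

        outb(0x0A, 0x06);                        /* mask channel 2             */
        outb(0x0C, 0xFF);                        /* reset the flip-flop        */

        outb(0x04, phys & 0xFF);                 /* address, low then high     */
        outb(0x04, (phys >> 8) & 0xFF);
        outb(0x81, (phys >> 16) & 0xFF);         /* page register (bits 16-23) */

        outb(0x0C, 0xFF);                        /* reset the flip-flop        */
        outb(0x05, count & 0xFF);                /* count, low then high       */
        outb(0x05, (count >> 8) & 0xFF);

        outb(0x0B, 0x46);                        /* single mode, write to
                                                    memory, channel 2          */
        outb(0x0A, 0x02);                        /* unmask channel 2: the
                                                    controller now performs the
                                                    transfer on the CPU's
                                                    behalf                     */
    }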

Single-Ended DMA

In this type of DMA, data transfer occurs in one direction only, either from the peripheral device to memory or vice versa. It involves a single channel for communication, making it easier to implement and understand.

Dual-Ended DMA

Dual-ended DMA allows bidirectional data transfers between the I/O device and memory. The DMA controller can initiate read and write operations independently, enhancing efficiency and improving overall system performance.

Arbitrated-Ended DMA

In arbitrated-ended DMA, multiple devices on a bus compete for access to memory, with a central arbiter deciding which device gets priority. This ensures fair access and prevents bus monopolisation by a single device. Arbitration optimises data flow by managing competing requests effectively.
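
A central arbiter can be as simple as a round-robin scan over the pending requests. The function below is a toy illustration of that policy, not the behaviour of any particular bus.

    #include <stdint.h>

    /* Toy round-robin arbiter: 'requests' has one bit per device that wants
     * the bus, 'last_grant' is the device granted most recently. The next
     * grant goes to the first requester after it, so no single device can
     * monopolise the bus. Returns the granted device, or -1 if none.        */
    int arbitrate(uint8_t requests, int last_grant, int num_devices)
    {
        for (int i = 1; i <= num_devices; i++) {
            int candidate = (last_grant + i) % num_devices;
            if (requests & (1u << candidate))
                return candidate;
        }
        return -1;   /* no device is requesting the bus */
    }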

Interleaved DMA

Interleaved DMA allows multiple devices to transfer data simultaneously, with data divided into smaller blocks or packets that are transferred alternatingly. This optimises system performance by minimising idle times and maximising throughput.

Programmed I/O DMA

In programmed I/O, the CPU itself controls every data transfer between peripheral devices and memory, issuing the commands for each transfer. Strictly speaking it is not DMA at all, and it is generally the least efficient approach because the CPU is occupied for the duration of the transfer.
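
For contrast, here is a sketch of a programmed I/O read in the style of an ATA PIO sector transfer: the CPU itself loops over the data port, which is exactly the per-word involvement that DMA avoids. The inw helper and the port number are assumptions for the example.

    #include <stdint.h>

    extern uint16_t inw(uint16_t port);   /* assumed port-I/O helper */

    #define ATA_DATA_PORT 0x1F0           /* primary ATA data register */

    /* Programmed I/O: the CPU reads every word of a 512-byte sector itself.
     * It is busy for the whole transfer - the work a bus-mastering device
     * would otherwise do on its own.                                        */
    void pio_read_sector(uint16_t *buf)
    {
        for (int i = 0; i < 256; i++)     /* 256 words = 512 bytes */
            buf[i] = inw(ATA_DATA_PORT);
    }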

Inefficiencies in DMA

While DMA improves data transfer efficiency, there are some inefficiencies and disadvantages to consider:

  • Complexity: Implementing DMA increases the complexity of system design and development, requiring careful management of DMA controllers, memory access, and bus arbitration.
  • Bus Contention: DMA controllers compete for access to the system bus, potentially leading to bus contention and performance degradation if multiple devices request access simultaneously.
  • Data Corruption: Improperly managed DMA transfers can result in data corruption or system instability due to errors in transferred data.
  • Security Concerns: DMA bypasses certain CPU-based security features, potentially exposing sensitive data to unauthorised access or tampering.
  • Compatibility Issues: Ensuring compatibility and optimising DMA performance across different hardware platforms and operating systems can be challenging.

Bus-mastering vs third-party DMA

Bus mastering is a feature supported by many bus architectures that enables a device connected to the bus to initiate direct memory access (DMA) transactions. It is also referred to as first-party DMA. In a bus mastering system, the CPU and peripherals can each be granted control of the memory bus. When a peripheral becomes a bus master, it can directly write to system memory without the involvement of the CPU, providing memory address and control signals as required.

Third-party DMA, on the other hand, involves a system DMA controller that actually performs the transfer. The "third party" is the DMA controller, which is separate from the CPU and the peripheral device. In this case, the DMA controller generates memory addresses, initiates memory read or write cycles, and communicates with the CPU and peripheral device.

The main difference between bus mastering and third-party DMA lies in who controls the DMA process. In bus mastering, the peripheral device itself becomes the master of the bus and initiates the DMA transactions directly. In third-party DMA, a separate DMA controller is responsible for coordinating the DMA transfers.

In terms of performance, bus mastering can provide significant improvements, especially for general-purpose operating systems. It allows for more efficient data transfer and keeps CPU utilisation low. However, some real-time operating systems prohibit peripherals from becoming bus masters, because the scheduler can then no longer arbitrate for the bus and so cannot guarantee predictable latency.

Additionally, bus mastering and third-party DMA may differ in their implementation details. For example, bus mastering may require certain signals or protocols to enable a peripheral to become a bus master, such as BR (Bus Request) and BG (Bus Grant) signals. Third-party DMA controllers, on the other hand, may have specific hardware registers that can be written and read by the CPU to control the DMA process.

In summary, bus mastering and third-party DMA are two different approaches to achieving direct memory access. Bus mastering provides more direct control to the peripheral device, while third-party DMA relies on a separate DMA controller to manage the data transfers. The choice between the two depends on the specific system requirements, performance needs, and hardware capabilities.

Bus-mastering and packet-based DMA

Bus mastering is a feature supported by many bus architectures that enables a device connected to the bus to initiate direct memory access (DMA) transactions. DMA is the generic term used to refer to a transfer protocol where a peripheral device transfers information directly to or from memory without the system processor being required to perform the transaction.

Bus-mastering DMA allows data to be transferred efficiently between the hard disk and system memory while keeping CPU utilisation, the amount of work the CPU must do during the transfer, low.

Packet-based bus-master DMA is a type of DMA in which the driver of the bus-master adapter determines when a DMA transfer operation is done and when to begin another transfer operation for a given IRP.

To use packet-based DMA, drivers of bus-master DMA devices call the following general sequence of support routines as they process an IRP requesting a DMA transfer (a condensed sketch of this sequence follows the list):

  • KeFlushIoBuffers: just before attempting to allocate map registers for a transfer request.
  • AllocateAdapterChannel: when the driver is ready to program the bus-master adapter for DMA.
  • MmGetMdlVirtualAddress: to get an index into the MDL, required as an initial parameter to MapTransfer.
  • MapTransfer: to make the system physical memory that backs the IRP's buffer device-accessible.
  • FlushAdapterBuffers: at the end of each DMA transfer operation to/from the target device, to determine whether all the requested data has been completely transferred.
  • FreeMapRegisters: as soon as all DMA operations for the current IRP are done, because all the requested data has been completely transferred or because the driver must fail the IRP due to a device or bus I/O error.
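
A condensed sketch of that call order is shown below, using the DMA_OPERATIONS form of the same routines. The device-extension layout, the StartDmaWrite and DmaTransferDone names, and the omitted device-programming step are assumptions, so treat this as an outline of the sequence rather than a complete driver.

    #include <wdm.h>

    /* Sketch only: the extension layout and function names are assumptions;
     * the call order follows the list above.                                */
    typedef struct _DEVICE_EXTENSION {
        PDMA_ADAPTER DmaAdapter;        /* obtained earlier via IoGetDmaAdapter */
        ULONG        MapRegisterCount;  /* also returned by IoGetDmaAdapter     */
        PVOID        MapRegisterBase;
        PIRP         CurrentIrp;
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    IO_ALLOCATION_ACTION
    AdapterControl(PDEVICE_OBJECT DeviceObject, PIRP Irp,
                   PVOID MapRegisterBase, PVOID Context)
    {
        PDEVICE_EXTENSION devExt = (PDEVICE_EXTENSION)Context;
        PMDL              mdl = devExt->CurrentIrp->MdlAddress;
        PVOID             currentVa = MmGetMdlVirtualAddress(mdl);
        ULONG             length = MmGetMdlByteCount(mdl);
        PHYSICAL_ADDRESS  deviceAddr;

        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Irp);

        devExt->MapRegisterBase = MapRegisterBase;

        /* MapTransfer makes the physical memory behind the IRP's buffer
         * device-accessible; the returned logical address is what gets
         * programmed into the bus-master adapter's DMA registers.           */
        deviceAddr = devExt->DmaAdapter->DmaOperations->MapTransfer(
            devExt->DmaAdapter, mdl, MapRegisterBase, currentVa,
            &length, TRUE /* WriteToDevice */);

        /* ...program the device with deviceAddr/length and start the
         * transfer (device-specific, omitted here)...                       */
        UNREFERENCED_PARAMETER(deviceAddr);

        return DeallocateObjectKeepRegisters;  /* keep the map registers until
                                                  the transfer completes      */
    }

    NTSTATUS
    StartDmaWrite(PDEVICE_OBJECT DeviceObject, PIRP Irp)  /* hypothetical name */
    {
        PDEVICE_EXTENSION devExt = DeviceObject->DeviceExtension;

        devExt->CurrentIrp = Irp;

        /* Flush CPU caches over the buffer before the hardware reads it.    */
        KeFlushIoBuffers(Irp->MdlAddress, FALSE /* write to device */, TRUE);

        /* Ask for the adapter channel and map registers; AdapterControl is
         * called back when they are available.                              */
        return devExt->DmaAdapter->DmaOperations->AllocateAdapterChannel(
            devExt->DmaAdapter, DeviceObject, devExt->MapRegisterCount,
            AdapterControl, devExt);
    }

    /* Called (for example, from the driver's DPC) once the device signals
     * that the DMA transfer has completed.                                  */
    VOID
    DmaTransferDone(PDEVICE_EXTENSION devExt)
    {
        PMDL  mdl = devExt->CurrentIrp->MdlAddress;
        PVOID currentVa = MmGetMdlVirtualAddress(mdl);

        /* Flush anything still held in the adapter's buffers, then release
         * the map registers now that all DMA for this IRP is done.          */
        devExt->DmaAdapter->DmaOperations->FlushAdapterBuffers(
            devExt->DmaAdapter, mdl, devExt->MapRegisterBase, currentVa,
            MmGetMdlByteCount(mdl), TRUE /* WriteToDevice */);

        devExt->DmaAdapter->DmaOperations->FreeMapRegisters(
            devExt->DmaAdapter, devExt->MapRegisterBase,
            devExt->MapRegisterCount);
    }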

Frequently asked questions

What is bus mastering?

Bus mastering is a feature of bus architectures that enables a device connected to the bus to initiate direct memory access (DMA) transactions.

What does bus mastering do?

Bus mastering allows a device on the bus to access RAM independently of the CPU. It is designed to allow data transfer between a peripheral component and RAM while the CPU carries out other work.

How is bus mastering used by graphics hardware?

On recent x86 systems, the video card uses bus mastering over PCIe to move framebuffer and other data between system memory and the card without CPU involvement.

What else is bus mastering called?

Bus mastering is also referred to as first-party DMA, in contrast with third-party DMA, where a system DMA controller performs the transfer.

What are some common bus-mastering operations?

Common bus-mastering operations include a video card moving framebuffer data to or from main memory, an Ethernet card transferring a received packet to main memory, and a hard disk controller transferring data blocks.
