July 3rd, 2007 - Adventures in Engineering — LiveJournal
The wanderings of a modern ronin.

Ben Cantrick
  Date: 2007-07-03 00:58
  Subject:   Many cores = exponentially worse contention.
  Tags:  parallel computing, reddit

One of the greatest challenges facing the designers of many-core processors is resource contention. The chart visually lays out the problem of resource contention, but for most of us the idea is intuitively easy to grasp: more cores and more simultaneous threads means more contention for shared resources, specifically cache space and memory bandwidth.


Ben Cantrick
  Date: 2007-07-03 16:15
  Subject:   LOLMetal
  Music: Posehn - Metal By Numbers
  Tags:  humor, reddit


Ben Cantrick
  Date: 2007-07-03 18:52
  Subject:   Building a mental model of concurrency.
  Tags:  parallel computing, reddit

1. Responsiveness and Isolation Via Asynchronous Agents - Stay responsive by running tasks independently and asynchronously, communicating via messages.

2. Throughput and Scalability via Concurrent Collections - Use more cores to get the answer faster by running operations on groups of things; exploit parallelism in data and algorithm structures.

3. Consistency Via Safely Shared Resources - Avoid races by synchronizing access to shared resources, especially mutable objects in shared memory.


Ben Cantrick
  Date: 2007-07-03 20:00
  Subject:   (Lots) More on Vishkin's XMT parallel chip.
  Mood: der uber-nerd
  Music: Kraftwerk - Home Computer
  Tags:  parallel computing

About the parallel RAM architecture.

The XMT programming model is quite simple. See figure. The primary example is XMT-C, a variant of the standard C programming language that adds only two basic commands, the main one being spawn. The program alternates between serial mode and parallel mode: the spawn command can declare any number of concurrent threads and causes a switch from serial mode to parallel mode. Each thread advances through its program at its own speed until termination. When all threads terminate (depicted as Join in the figure), the program switches back to serial mode. Successive spawn commands can each declare a different number of threads. This brief description is not meant to replace the fuller descriptions found in the technical presentations and papers below.


Vector Addition. You are given two vectors A and B of the same size m. The objective is to add these two vectors into a third vector C.

PRAM pseudo-code:

    for i = 0 to m-1 pardo
        C[i] = A[i] + B[i];

XMTC code:

    spawn(0, m-1)
        C[$] = A[$] + B[$];

In both cases, we create m virtual threads, which can run concurrently. The for-pardo structure in the pseudo-code and the spawn statement in the XMTC code are responsible for this task. Each thread reads one element from A and one from B, adds them, and writes the result into C.

Virtual threads are distinguished from each other by their unique thread ID. This thread ID is represented by i in the PRAM code, and by $ in the XMTC code.

Well, that part looks easy... what about synchronization?
