Out in the analogue world, slowness is relative: it depends on context, such as what speed is expected, and on physical effects that dilate time, such as how fast things move relative to one another and the difference between the gravitational fields they are in. In the digital world slowness is even more relative, doubly so: relative to wall-clock time, with all of the physics behind it, and relative to CPU time, whose relationship to wall-clock time is itself in constant flux.
Out in the analogue world we can look up at the sky and immediately tell that it's noon, or observe the changing energy levels of the electrons in our atomic clock to tell time with higher precision. We can also trust our watchmakers and look towards our wrists, or ask our voice-activated personal assistants.
As a piece of software, similarly, you tell time by asking the operating system. On the Linux operating system we generally have access to eight different clocks (Linux man-pages project, 2020), the most important being CLOCK_REALTIME, which holds the so-called wall-clock time in seconds since the Unix epoch, 00:00:00 UTC on 1 January 1970. The motherboard assists with a special chip, powered by the CMOS battery, so that the time is not forgotten when the computer is turned off. As a human this is often the time you are interested in, but it can sometimes change if a user or a program updates it, e.g. when a leap second needs to be added or the clock has drifted and is corrected. Imagine you are recording the order of events and suddenly the time is two minutes earlier than before, messing up all of your ordering. For this reason we have been given CLOCK_MONOTONIC_RAW, a clock that strictly follows the arrow of time and never jumps around, but starts at some unspecified time in the past.
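As a rough sketch of what asking the operating system looks like, both of these clocks can be read with the clock_gettime(2) system call; a minimal C program might look like this:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec real, mono;

        /* Wall-clock time: seconds since the Unix epoch; may jump if adjusted. */
        clock_gettime(CLOCK_REALTIME, &real);

        /* Strictly increasing time from some unspecified point in the past,
           unaffected by clock adjustments. */
        clock_gettime(CLOCK_MONOTONIC_RAW, &mono);

        printf("CLOCK_REALTIME:      %lld.%09ld\n",
               (long long)real.tv_sec, real.tv_nsec);
        printf("CLOCK_MONOTONIC_RAW: %lld.%09ld\n",
               (long long)mono.tv_sec, mono.tv_nsec);
        return 0;
    }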
Sometimes you only care about the time that your program has spent executing instructions on the CPU, as opposed to time spent waiting for its turn or for other things to finish. That is when you might use CLOCK_PROCESS_CPUTIME_ID, conveniently converted from CPU cycles to wall-clock time for you.
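A minimal sketch of the difference, using the same system call: a program that sleeps for a second uses up a second of wall-clock time but almost no CPU time (the exact numbers will vary from system to system):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        struct timespec cpu, wall_start, wall_end;

        clock_gettime(CLOCK_MONOTONIC_RAW, &wall_start);
        sleep(1);  /* waiting: wall-clock time passes, CPU time barely moves */
        clock_gettime(CLOCK_MONOTONIC_RAW, &wall_end);

        /* CPU time spent executing instructions in this process. */
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu);

        printf("wall-clock elapsed: ~%ld s\n",
               (long)(wall_end.tv_sec - wall_start.tv_sec));
        printf("CPU time used:      %lld.%09ld s\n",
               (long long)cpu.tv_sec, cpu.tv_nsec);
        return 0;
    }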
CPU cycles can take a varying amount of wall-clock time. Deciding how much time a CPU cycle takes is the job of the CPU clock, an oscillator that switches between low and high voltages (0 and 1) at a given rate to create a clock signal (Petzold, 2000).
Modern CPUs vary their clock frequency to be more efficient with their power use, which means that the relationship between CPU cycles and wall-clock time is in constant flux. As clock signal generation technology progresses, this relationship will become more and more fluid (Xiu, 2017).

"Clock signal is the mechanism that establishes the flow-of-time. Without it, the order for all events cannot be arranged and, consequently, no useful tasks can be performed." (Xiu, 2017, p. 28)
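On Linux machines with frequency scaling, this flux can usually be observed directly through the cpufreq files in sysfs; a small sketch, assuming the standard cpufreq path is present on the machine:

    #include <stdio.h>

    int main(void)
    {
        /* Current frequency of CPU 0 in kHz, as reported by the cpufreq
           subsystem; reading it repeatedly under different loads will
           typically show different values. */
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
        if (f == NULL) {
            perror("cpufreq not available");
            return 1;
        }
        long khz;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu0 is currently running at %ld MHz\n", khz / 1000);
        fclose(f);
        return 0;
    }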
Sound usually happens in time. It is the variation of density in a medium over time that is the physical sound we hear. When translating acoustic sound into the digital domain, the fundamental difference between analogue and digital becomes clear: sound is continuous, but digital is discrete. The original continuously varying signal of the sound wave gets measured at regular intervals, the sampling frequency, with finite precision. The precision is therefore limited in both time and amplitude. When converting a digital sound back into an analogue signal, the opposite process is applied: a circuit does its best to fill in the waveform between the digital samples, usually by oversampling the digital signal with linear interpolation between samples (Manning, 2013, pp. 254-255).
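As a sketch of the idea behind that interpolation, the following toy function fills in a value between two neighbouring samples; a real reconstruction circuit is far more sophisticated, but the principle of estimating the waveform between measured points is the same:

    #include <stdio.h>
    #include <math.h>

    /* Linearly interpolate between two neighbouring samples.
       pos is a fractional read position, e.g. 1.5 means halfway
       between samples 1 and 2. */
    double interpolate(const double *samples, double pos)
    {
        int i = (int)floor(pos);
        double frac = pos - i;
        return samples[i] * (1.0 - frac) + samples[i + 1] * frac;
    }

    int main(void)
    {
        double samples[] = { 0.0, 1.0, 0.0, -1.0 };
        printf("%f\n", interpolate(samples, 1.5));  /* halfway: 0.5 */
        return 0;
    }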
When this conversion from digital to analogue audio happens, the samples must be converted at the correct speed, the sampling frequency, but until then they are independent of time. Purely digital audio manipulation happens at whatever speed the CPU is currently running and doesn't care about how many seconds or minutes pass. When time does matter, during playback, every sample is played at the exact same speed.
Therefore, while digital audio manipulation tools are able to change the perceived speed of a recorded sound to a great extent, the digital sound signal itself is neither slow nor quick.
For digital audio, however, there is a specific way in which slowness can occur: latency. When digital audio is produced, a certain number of audio samples have to be delivered to the sound card before they are scheduled to be played. If the software producing the audio fails to meet this requirement, a buffer underrun occurs and pops and clicks are heard in the audio stream. The buffer therefore needs to be sufficiently large that an interruption in the sound processing software does not cause it to miss its sound deadline. A larger buffer, however, introduces latency between the time a signal is received (e.g. a note played on a digital keyboard) and the time the corresponding sound is heard.
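The trade-off is simple arithmetic: the latency introduced by the buffer is its length in samples divided by the sampling frequency. A small sketch, with an assumed sampling frequency of 48 kHz and a few common buffer sizes:

    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 48000.0;  /* samples per second */
        const int buffer_sizes[] = { 64, 256, 1024 };

        /* Latency in milliseconds = buffer size / sample rate * 1000.
           Bigger buffers survive longer interruptions but delay the
           sound more. */
        for (int i = 0; i < 3; i++)
            printf("%4d samples -> %6.2f ms\n", buffer_sizes[i],
                   buffer_sizes[i] / sample_rate * 1000.0);
        return 0;
    }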
That is one type of latency. Another type of latency lies in the low-level workings of the kernel itself. When a piece of software is running on a CPU it can get interrupted by other tasks, or be prevented from starting because other tasks are already running. Eliminating this kind of latency is one of the tasks of the real time kernel, and one of the first uses of the Linux real time kernel was for audio. This led to the development of the ftrace tool that this project now makes use of artistically (Rostedt, 2009). For an in-depth technical analysis of the real time kernel, see Timing analysis of the PREEMPT RT Linux kernel (de Oliveira and de Oliveira, 2015).

Slowness, in the context of buffer overruns and underruns, is considered catastrophic because of the audio glitches, so kernel and audio developers alike put great effort into eliminating it.
Linux man-pages project. (2020). CLOCK_GETRES(2). Available: https://www.man7.org/linux/man-pages/man2/clock_getres.2.html (accessed 2020-05-27).
Xiu, L. (2017). Clock technology: The next frontier. IEEE Circuits and Systems Magazine, 17(2), 27-46.