Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed sequentially. For instance, the forward and backward passes are executed layer by layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which parallelizes typically sequential operations in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of steps as the solution of a specific system of equations, which we recover using the Parallel Cyclic Reduction algorithm. This reduces the complexity of computing the sequential operations from linear to logarithmic in the number of steps, thus yielding a speedup when the number of steps is large. To verify the theoretically lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach substantial speedups for both the forward and the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, achieving faster training and faster generation, respectively, when compared to the sequential approach.
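The core idea, recasting a sequential recurrence as a system of equations that can be resolved in logarithmic depth, can be illustrated with a minimal Python sketch. This is not the paper's implementation: as a hypothetical stand-in for a layer-by-layer update, we use the scalar linear recurrence x_i = a_i * x_{i-1} + b_i and reduce it by recursive doubling, the same combine-with-element-2^k-back pattern that Parallel Cyclic Reduction exploits. Function names are illustrative.

```python
def sequential_recurrence(a, b, x0):
    # Baseline: x_i = a_i * x_{i-1} + b_i, one step after another (linear cost).
    x, xs = x0, []
    for ai, bi in zip(a, b):
        x = ai * x + bi
        xs.append(x)
    return xs

def parallel_reduction_recurrence(a, b, x0):
    # Recursive-doubling sketch: after the round with offset `step`, each x_i
    # is expressed directly in terms of x_{i - 2*step} (or of x0 once the
    # chain runs off the front). After ceil(log2(L)) rounds every x_i depends
    # only on x0. The inner updates within a round are mutually independent,
    # so on parallel hardware each round is one step: logarithmic depth overall.
    L = len(a)
    A, B = list(a), list(b)
    step = 1
    while step < L:
        A_new, B_new = list(A), list(B)
        for i in range(step, L):
            # Substitute the relation for x_{i-step} into the one for x_i.
            A_new[i] = A[i] * A[i - step]
            B_new[i] = A[i] * B[i - step] + B[i]
        A, B = A_new, B_new
        step *= 2
    # Every relation now references x0 directly.
    return [A[i] * x0 + B[i] for i in range(L)]
```

Both functions return the full trajectory x_1, ..., x_L and agree to rounding error; the parallel version simply trades the length-L dependency chain for log2(L) rounds of independent coefficient updates.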