In-memory computing (IMC) is a promising hardware architecture for circumventing the memory wall in data-intensive applications such as deep learning. Among various memory technologies, static random-access memory (SRAM) stands out for its high computing accuracy, reliability, and scalability to advanced technology nodes. This paper presents a novel multi-bit capacitive convolution in-SRAM computing macro for high-accuracy, high-throughput, and high-efficiency deep learning inference. It realizes fully parallel charge-domain multiply-and-accumulate (MAC) within compact 8-transistor 1-capacitor (8T1C) SRAM arrays whose cells are only 41% larger than standard 6T cells. It performs MAC with multi-bit activations without the conventional digital bit-serial shift-and-add scheme, drastically improving throughput for high-precision CNN models. An ADC-reduction encoding scheme complements the compact SRAM design by halving the number of ADCs needed, saving both energy and area. A 576×130 macro with 64 ADCs is evaluated in 65 nm with post-layout simulations, showing 4.60 TOPS/mm² compute density and 59.7 TOPS/W energy efficiency with 4-bit activations and 4-bit weights. The MC2-RAM also achieves excellent linearity, with only 0.14 mV (4.5% of the LSB) standard deviation of the output voltage in Monte Carlo simulations.
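As a rough behavioral sketch of the charge-domain MAC idea (the function and parameter names are illustrative, and the idealized linear charge-sharing model is an assumption, not the paper's circuit implementation), each cell can be modeled as driving its capacitor to a voltage proportional to the product of a multi-bit activation and a binary weight, with charge sharing across equal column capacitors averaging the contributions into one analog output:

```python
def charge_domain_mac(activations, weights, vdd=1.2, act_bits=4):
    """Idealized model of fully parallel charge-domain MAC.

    Each cell capacitor is driven to a voltage proportional to
    (activation * weight bit); charge sharing over n equal capacitors
    yields an output voltage proportional to the MAC result.
    """
    assert len(activations) == len(weights)
    n = len(activations)
    full_scale = 2**act_bits - 1  # max code for a multi-bit activation
    # Per-cell voltage: scaled activation gated by the binary weight.
    cell_voltages = [vdd * (a / full_scale) * w
                     for a, w in zip(activations, weights)]
    # Ideal charge sharing across n equal capacitors averages the voltages.
    return sum(cell_voltages) / n

# Example: 4-bit activations with binary weights.
acts = [3, 15, 7, 0]
wts = [1, 1, 0, 1]
v_out = charge_domain_mac(acts, wts)  # voltage read out by a column ADC
```

In this toy model the entire dot product settles in a single charge-sharing step, which is why no bit-serial shift-and-add passes over the activation bits are needed; a real macro would additionally account for parasitics and ADC quantization.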