<p dir="ltr">Edge AI increasingly runs into the von Neumann bottleneck: energy and latency are dominated by moving weights and activations rather than by arithmetic. This thesis proposes a programmable digital compute-in-memory (DCIM) SoC built around Ibex, a 32-bit RISC-V core that provides the control plane while a DCIM engine performs bit-level multiply–accumulate directly in SRAM. The engine is designed to be software-friendly and bus-compatible: all control/status and scratchpad windows appear as ordinary memory-mapped regions that strictly follow the Ibex LSU req/gnt/valid protocol. To balance flexibility and efficiency, the engine introduces a compact four-instruction ISA (full-macro move, block move, DCIM-MAC, and write-back) that orchestrates data motion and compute across banked SRAM macros and 8T DCIM SRAM macros. A reconfigurable distribution network injects operands, while a multi-format shift-and-adder tree (with exponent processing for floating point) performs reduction and normalization. A key contribution is multi-format support for INT8, FP8, INT16, and FP16, using contiguous FP storage (sign, exponent, and mantissa kept together) to simplify addressing and eliminate format-specific memory layouts. The design adopts 1D tiling with fine-grained access and computation, enabling a continuous mapping that sustains near-100% memory utilization, avoids padding, and, critically, eliminates wide partial sums, which typically inflate buffers and bandwidth in CIM pipelines. The result is a programmable accelerator that retains RISC-V portability while exposing domain-specific performance through a minimal engine ISA and an on-engine microprogram/command stream. The thesis details the SoC organization, the DCIM micro-architecture, and the instruction formats, and positions the work against recent RISC-V-integrated accelerators spanning near-memory and fully digital CIM.
Collectively, the architecture demonstrates how tight RISC-V coupling and a format-aware DCIM datapath deliver high utilization and practical programmability for modern edge inference.</p>