The rapid expansion of AI workloads has created scenarios where an 8-GPU server can draw up to 10 kW and a full rack can exceed 100 kW, making power efficiency critical. This technical presentation provides an in-depth analysis of power loss across the power-delivery topology of AI servers and racks, covering busbars, PSUs, power shelves, PCBs, trays, GPU cards, and cables. It also offers proposals and examples for future optimization.
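To illustrate why stage-by-stage loss analysis matters at these power levels, the sketch below multiplies the efficiencies of cascaded delivery stages to estimate how much of a 10 kW input actually reaches the silicon. All efficiency figures are hypothetical placeholders for illustration, not measurements from the presentation.

```python
# Illustrative sketch: cascaded power-delivery efficiency in an AI server.
# Every efficiency value below is a hypothetical placeholder, not measured data.

STAGES = {
    "PSU (AC-DC)": 0.96,
    "Busbar": 0.995,
    "Power shelf / distribution": 0.99,
    "PCB power planes": 0.99,
    "GPU-card VRM (DC-DC)": 0.92,
}

def delivered_power(input_power_w: float, stages: dict) -> float:
    """Chain stage efficiencies to get the power reaching the load."""
    power = input_power_w
    for _name, efficiency in stages.items():
        power *= efficiency
    return power

if __name__ == "__main__":
    p_in = 10_000.0  # 10 kW server, per the abstract
    p_out = delivered_power(p_in, STAGES)
    loss = p_in - p_out
    print(f"Delivered: {p_out:.0f} W, lost: {loss:.0f} W "
          f"({100 * loss / p_in:.1f}% of input)")
```

With these placeholder numbers, roughly 1.4 kW of a 10 kW input is dissipated before reaching the GPUs, showing why even small per-stage efficiency gains compound meaningfully at rack scale.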