As the Compute Express Link™ (CXL™) interconnect protocol gains in popularity, driven mainly by the promise of higher performance and lower latency for CPU-to-device communication, many questions arise around the expected latency improvements. In this presentation, we describe the data flow model for the three protocols that comprise CXL (CXL.io, CXL.cache, CXL.mem) in contrast to traditional PCI Express, and examine the implications for latency at the system level. We then present a few specific use cases that would clearly benefit from a lower-latency CXL interconnect, and conclude with a look at the PLDA Controller IP for CXL and the design features that enable optimal CXL performance in silicon chips.