Federated Learning (FL) is an emerging machine learning approach that allows multiple entities (a.k.a. parties or clients) to collaboratively train a machine learning model under the coordination of an aggregator (a.k.a. server) without directly revealing their private training data. Due to privacy and regulatory constraints, many industry sectors, such as healthcare, banking, and retail, have a growing interest in employing FL to facilitate model training across multiple data centers. However, the basic version of federated learning is usually not sufficient to meet privacy requirements. In particular, withholding raw training data does not guarantee full privacy protection, since many attacks on the shared model parameters can reveal sensitive information about the training data, such as membership inference attacks, data reconstruction attacks, and property testing attacks. These threats call for cryptographic techniques to further protect FL systems. In this demo, we will walk through the importance of protecting FL with cryptographic techniques, discuss the high-level basics of fully homomorphic encryption (FHE), and present an end-to-end implementation of FHE in our IBM FL library. Moreover, we will conduct a comprehensive comparison of the communication and computation costs of deploying FHE in FL. To conclude, we will summarize the existing challenges and opportunities in securing FL.
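
To give a flavor of how FHE protects model aggregation, the following is a minimal sketch, assuming the open-source TenSEAL library with the CKKS scheme rather than the implementation in the IBM FL library presented in the demo: clients encrypt their model updates, the aggregator combines ciphertexts without decrypting anything, and only a holder of the secret key can recover the averaged update.

```python
# Minimal sketch of FHE-protected federated averaging using TenSEAL (CKKS).
# Illustrative only; the IBM FL library's actual FHE integration may differ.
import tenseal as ts

# In a real deployment the secret key stays with the clients (or is
# threshold-shared); the aggregator receives only the evaluation context.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Each client encrypts its flattened model update before sending it.
client_updates = [[0.10, -0.20, 0.30], [0.05, 0.15, -0.25]]
encrypted_updates = [ts.ckks_vector(context, upd) for upd in client_updates]

# The aggregator sums ciphertexts and scales by 1/n without seeing plaintexts.
encrypted_sum = encrypted_updates[0] + encrypted_updates[1]
encrypted_avg = encrypted_sum * (1.0 / len(client_updates))

# Only a party holding the secret key can decrypt the averaged update.
print(encrypted_avg.decrypt())  # approximately [0.075, -0.025, 0.025]
```

Note that such ciphertexts are considerably larger than the plaintext updates they encode, which is exactly the communication and computation overhead the demo's cost comparison quantifies.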