[61] D. B. Skillicorn. Foundations of parallel programming. Number 6. Cambridge University Press, 2005.

[62] S. U. Stich. Local SGD converges fast and communicates little. arXiv preprint arXiv:1805.09767, 2018.

[63] S. U. Stich, J.-B. Cordonnier, and M. Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems, pages 4447–4458, 2018.

[64] P. Subramanyan, R. Sinha, I. Lebedev, S. Devadas, and S. A. Seshia. A formal foundation for secure remote execution of enclaves. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 2435–2450. ACM, 2017.

[65] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.

[66] H. Tang, X. Lian, S. Qiu, L. Yuan, C. Zhang, T. Zhang, and J. Liu. DeepSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression. arXiv preprint arXiv:1907.07346, 2019.

[67] H. Tang, X. Lian, M. Yan, C. Zhang, and J. Liu. D²: Decentralized training over decentralized data. arXiv preprint arXiv:1803.07068, 2018.

[68] A. Tjandra, S. Sakti, and S. Nakamura. Tensor decomposition for compressing recurrent neural network. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2018.

[69] M. Tu, V. Berisha, Y. Cao, and J.-S. Seo. Reducing the model order of deep neural networks using information theory. In 2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pages 93–98. IEEE, 2016.

[70] P. Vanhaesebrouck, A. Bellet, and M. Tommasi. Decentralized collaborative learning of personalized models over networks. 2017.

[71] D. R. Varma. Managing DICOM images: Tips and tricks for the radiologist. The Indian Journal of Radiology & Imaging, 22(1):4, 2012.

[72] T. Vogels, S. P. Karimireddy, and M. Jaggi. PowerSGD: Practical low-rank gradient compression for distributed optimization. arXiv preprint arXiv:1905.13727, 2019.

[73] P. Voigt and A. Von dem Bussche. The EU General Data Protection Regulation (GDPR). A Practical Guide, 1st Ed., 2017.

[74] D. Wang, A. Khosla, R. Gargeya, H. Irshad, and A. H. Beck. Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718, 2016.

[75] H. Wang, S. Sievert, S. Liu, Z. Charles, D. Papailiopoulos, and S. Wright. ATOMO: Communication-efficient learning via atomic sparsification. In Advances in Neural Information Processing Systems, pages 9850–9861, 2018.

[76] J. Wang and G. Joshi. Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv preprint arXiv:1808.07576, 2018.

[77] N. Wang, J. Choi, D. Brand, C.-Y. Chen, and K. Gopalakrishnan. Training deep neural networks with 8-bit floating point numbers. In Advances in Neural Information Processing Systems, pages 7675–7684, 2018.

[78] W. Wen, C. Xu, F. Yan, C. Wu, Y. Wang, Y. Chen, and H. Li. TernGrad: Ternary gradients to reduce communication in distributed deep learning. arXiv preprint arXiv:1705.07878, 2017.

[79] S. Wiedemann, H. Kirchhoffer, S. Matlage, P. Haase, A. Marban, T. Marinc, D. Neumann, T. Nguyen, A. Osman, D. Marpe, et al. DeepCABAC: A universal compression algorithm for deep neural networks. arXiv preprint arXiv:1907.11900, 2019.

[80] S. Wiedemann, A. Marban, K.-R. Müller, and W. Samek. Entropy-constrained training of deep neural networks. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2019.

[81] S. Wiedemann, K.-R. Müller, and W. Samek. Compact and computationally efficient representation of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 31(3):772–785, 2020.

[82] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer. FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10734–10742, 2019.

[83] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.

[84] T.-J. Yang, Y.-H. Chen, and V. Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the




