and potential locations for edge cloud placement, and represented it by a graph G = (B ∪ S, E), where B represents the base stations, S is the set of potential locations for edge cloud placement, and E is the set of connections between base stations. The steps to solve the edge placement problem are established according to the minimum communication latency between two base stations and the minimum workload of each edge cloud. The scheme takes as input a set of base stations and edge clouds, and returns the optimal locations of the edge clouds. It first determines whether an edge cloud is placed at a given location, whether a base station is allocated to a given edge cloud, and whether the base station is associated with an edge cloud; it then defines a fitness function with which the edge placement problem is transformed into a single-objective optimization problem using a weighted-sum method. This problem is solved by selecting the locations with minimal communication delay using the K-means algorithm, reformulating the workload allocation problem as a mixed-integer quadratic program, and solving it with the Boolean Quadric Polytope cutting-plane method. The proposed approach is, however, not the most efficient: changes in workload size during allocation are not taken into consideration, which makes the solution less reliable.
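A minimal sketch of this weighted-sum placement idea is shown below. It is illustrative only: the base station coordinates, workloads, and weighting factors are hypothetical, scikit-learn's KMeans stands in for the delay-minimizing location selection, and the mixed-integer workload allocation and cutting-plane refinement are not reproduced.

    # Illustrative sketch only: hypothetical data and weights; it mimics the
    # weighted-sum fitness and the K-means selection of edge cloud locations,
    # not the full mixed-integer / cutting-plane procedure described above.
    import numpy as np
    from sklearn.cluster import KMeans

    def fitness(latency, workload_imbalance, w_latency=0.7, w_load=0.3):
        """Weighted-sum scalarization of the two objectives."""
        return w_latency * latency + w_load * workload_imbalance

    # Hypothetical base station coordinates (x, y) and per-station workloads.
    base_stations = np.random.rand(50, 2)
    workloads = np.random.rand(50)

    # Choose k edge cloud locations by clustering base stations; the cluster
    # centres approximate the minimum-communication-delay placements.
    k = 5
    kmeans = KMeans(n_clusters=k, n_init=10).fit(base_stations)
    centres = kmeans.cluster_centers_
    assignment = kmeans.labels_

    # Evaluate the placement with the weighted-sum fitness: mean distance to
    # the assigned centre (latency proxy) plus workload imbalance across clouds.
    latency = np.mean(np.linalg.norm(base_stations - centres[assignment], axis=1))
    per_cloud_load = np.bincount(assignment, weights=workloads, minlength=k)
    imbalance = per_cloud_load.std()
    print("fitness:", fitness(latency, imbalance))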
4.3 Energy consumption and latency minimization during data offloading
A system that minimizes execution latency during the migration of a mobile web worker from a mobile device to an edge server, while providing seamless offloading, was proposed by Hyuk-Jin Jeong et al. [40]. In this system, the mobile client runs the intact web app, whose computation-intensive code is executed in a web browser. When the client detects accessible edge servers, the mobile web worker manager is responsible for finding the best server to process the worker, i.e., the one that minimizes the delay between the time at which the main thread sends a request to the worker and the time at which it receives the result. The HTML5 web worker is thus migrated across the cloud, the client, and the edge, and the offloading state is preserved while the mobile client switches its target server. The system moves web workers by means of web snapshots, JavaScript scripts that restore the run-time state of a web worker when executed. The authors also highlighted the difficulty of generating snapshot code that restores both JavaScript objects and native data such as WebAssembly functions and built-in objects.
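The server-selection step can be pictured as the simple delay comparison below. This is an illustrative Python sketch rather than the authors' JavaScript implementation, and the round-trip and compute-time estimates are assumed to come from some external probing mechanism.

    # Illustrative sketch, not the implementation of [40]: choose the offloading
    # target that minimizes the request-to-result delay of a web worker, modelled
    # here as network round-trip time plus estimated execution time.
    from dataclasses import dataclass

    @dataclass
    class EdgeServer:
        name: str
        rtt_ms: float       # measured round-trip time to the server (assumed probe)
        compute_ms: float   # estimated worker execution time on the server

    def best_server(servers, local_compute_ms):
        """Return the server with the lowest total delay, or None if local
        execution on the mobile client is cheaper."""
        best = min(servers, key=lambda s: s.rtt_ms + s.compute_ms, default=None)
        if best is None or best.rtt_ms + best.compute_ms >= local_compute_ms:
            return None
        return best

    servers = [EdgeServer("edge-A", rtt_ms=12.0, compute_ms=40.0),
               EdgeServer("edge-B", rtt_ms=30.0, compute_ms=25.0)]
    print(best_server(servers, local_compute_ms=120.0))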
To reduce energy consumption and latency in fog computing architectures, Quang Duy La et al. [41] proposed an approach that uses device-driven and human-driven intelligence as key enablers; it performs adaptive, low-latency Medium Access Control (MAC)-layer scheduling among sensor devices and detects user behaviors by applying machine learning techniques. The authors also developed an algorithm for making efficient offloading decisions in the presence of multiple fog nodes. Device-driven intelligence refers to equipping devices with smarter functionalities such as sensing, computing, storage, smart data processing, networking services, and communication; human-driven intelligence associates human-domain data with network-domain decisions that benefit the network [41].

The article presents two case studies: user-behavior-driven healthcare monitoring and device-driven adaptive task offloading. The first case study uses a machine-learning-based health monitoring module to build a lightweight ML model that detects human activities from accelerometer data and drives an adaptive sensor sampling and MAC scheduling scheme. The second case study depicts an environment in which an end user holds N independent tasks, each of which can either be offloaded to the processor of any available fog node or processed locally on the end user's own processor; for each task, the user must decide which CPU should process it, with the objective of reducing delay and energy consumption. The resulting energy consumption and latency minimization problem is a mixed-integer nonlinear program. It is solved by first transforming it into a corresponding homogeneous Quadratically Constrained Quadratic Program (QCQP) and dropping the rank-one constraint, which relaxes the QCQP into a convex Semidefinite Program (SDP) solvable with the interior-point method, then constructing a number of feasible candidate solutions via Gaussian randomization, and finally choosing the candidate that minimizes the objective function over all candidates. The shortcoming associated with this solution is that intelligence in fog computing is still in its infancy and the assumptions made are not yet realistic.
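To make the relax-and-randomize pipeline concrete, the sketch below applies the same generic recipe (SDP relaxation of a binary QCQP followed by Gaussian randomization) to a placeholder problem; it is not the specific energy/latency formulation of [41], and the problem size and cost matrix C are assumptions.

    # Illustrative sketch only: generic SDP relaxation + Gaussian randomization
    # for a binary (+1/-1) offloading decision vector x minimizing x^T C x.
    # C is a placeholder cost matrix, not the energy/latency model of [41].
    import numpy as np
    import cvxpy as cp

    n = 8                                     # number of tasks (assumed)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    C = A @ A.T                               # hypothetical positive semidefinite cost

    # SDP relaxation: replace x x^T by a PSD matrix X and drop the rank-one
    # constraint; diag(X) = 1 encodes x_i in {-1, +1}.
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints).solve()

    # Gaussian randomization: draw vectors with covariance X, round to +/-1,
    # and keep the candidate with the smallest objective value.
    cov = (X.value + X.value.T) / 2           # symmetrize against numerical noise
    samples = rng.multivariate_normal(np.zeros(n), cov, size=100)
    candidates = np.sign(samples)
    candidates[candidates == 0] = 1
    best = min(candidates, key=lambda x: float(x @ C @ x))
    print("offloading decisions:", best)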
Amir Erfan Eshratifar et al. [42] introduced BottleNeck, a new deep learning architecture that reduces the size of the workload sent from the UE to the cloud, along with a training method that compensates for the potential accuracy loss arising from compressing the workload before its transmission to the cloud. BottleNeck is essentially an auto-encoder in which the agent is responsible for learning a compact representation of the features at an intermediate layer. It is a novel partitioning method that introduces a bottleneck into a neural network using the proposed BottleNeck unit. Spatial and channel-wise reduction units and compressor units are used on the mobile-device side of the architecture to generate a compact representation of the tensor that is transmitted to the cloud. BottleNeck's algorithm comprises three steps: training, profiling, and selection. For a given number of locations in the network, BottleNeck is placed at an arbitrarily selected layer. Different architectures associated with degrees of dimensionality reduction are trained along the channel




