Existing solutions for building cloud systems are universal: they do not account for the specifics of big-data workloads, such as the possibility of parallelizing and duplicating network streams (alternative routing and switching) and the large amounts of memory and computation consumed per analysis cycle (especially when neural-network analysis algorithms are used, both during training and during inference). As a result, the structures responsible for processing big data cannot be deployed in an arbitrary cloud. The proposed approach solves the problem of insufficient flexibility and efficiency in planning and orchestrating computing elements within cloud structures, enables isolation and resource limitation through containerization, and ensures network topological connectivity even when nodes have no directly accessible external IP addresses.
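The isolation and resource limits mentioned above are typically expressed through container runtime settings rather than custom code. A minimal configuration sketch in Docker Compose syntax (the service name, image, and limit values are illustrative assumptions, not details from the project):

```yaml
services:
  bigdata-worker:                    # hypothetical worker for one analysis cycle
    image: bigdata/worker:latest     # illustrative image name
    deploy:
      resources:
        limits:
          cpus: "2.0"                # cap the container at two CPU cores
          memory: 4g                 # cap memory consumed per analysis cycle
```

Limits of this kind let an orchestrator pack many analysis workers onto heterogeneous cloud nodes without one memory-hungry neural-network job starving its neighbors.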
Period of the event: 2019
In the first stage of the project, a set of structural models was developed: of a distributed heterogeneous multi-cloud computing system, of a computational task for processing big data, and of a platform that automates distributed big-data computations across multi-cloud structures. Methods for forecasting the resource consumption of distributed big-data processing applications were also proposed.
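The project does not publish its forecasting method, but the general idea of predicting per-cycle resource consumption from past measurements can be sketched with a simple exponentially weighted moving average (EWMA). Everything here, including the function name and smoothing factor, is an illustrative assumption:

```python
# Hypothetical sketch: one-step-ahead forecast of per-cycle resource usage
# via an exponentially weighted moving average (EWMA).

def ewma_forecast(samples, alpha=0.5):
    """Forecast the next value from past resource samples.

    samples -- past per-cycle measurements (e.g. MB of RAM, CPU-seconds)
    alpha   -- smoothing factor in (0, 1]; larger weights recent cycles more
    """
    if not samples:
        raise ValueError("need at least one past sample")
    estimate = samples[0]
    for value in samples[1:]:
        # Blend the newest observation with the running estimate.
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate

# Example: memory used (MB) over the last five analysis cycles.
memory_mb = [1024, 1100, 1080, 1200, 1150]
print(round(ewma_forecast(memory_mb), 1))  # prints 1142.8
```

A scheduler could use such a forecast to reserve capacity on a target cloud node before dispatching the next analysis cycle; in practice a model that also captures trends or workload features would likely replace the plain EWMA.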