Abstract Submission Opens: May 19, 2025

Early Bird Registration Date: June 25, 2025

Scientific Sessions

Session 1: Swarm robotics and multi-agent systems

Swarm robotics and multi-agent systems are fields within robotics and artificial intelligence that focus on the coordination and collective behavior of multiple autonomous agents or robots working together to achieve complex tasks.
Swarm robotics is inspired by the collective behavior of social insects like ants, bees, or birds. It involves large groups of relatively simple robots that cooperate using local rules and decentralized control without a central leader.
Multi-agent systems consist of multiple intelligent agents (which can be robots, software programs, or virtual entities) that interact, cooperate, or compete to solve problems that are difficult for a single agent.

Session 2: Database management systems

Database Management Systems (DBMS) are crucial software applications that facilitate the creation, management, and manipulation of databases. They provide an organized way to store, retrieve, and manage data, ensuring its integrity, security, and accessibility. DBMSs allow users to perform various operations on the data, such as querying, updating, and reporting, using a structured query language (SQL) or other interfaces. They support transaction processing, which ensures that data modifications are performed accurately and reliably, even in the event of system failures. By abstracting the complexities of data storage and management, DBMSs enable efficient data handling and facilitate the development of data-driven applications across various domains, from business and finance to research and healthcare.
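The querying and transaction operations described above can be sketched with Python's built-in sqlite3 module; the patients table and its rows below are invented purely for illustration.

```python
import sqlite3

# Hypothetical example: an in-memory SQLite database showing the
# create / insert / query operations a DBMS provides.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
cur.executemany("INSERT INTO patients (name, age) VALUES (?, ?)",
                [("Alice", 34), ("Bob", 52), ("Carol", 41)])
conn.commit()  # transaction boundary: the changes become durable here

# A declarative SQL query; the DBMS decides how to execute it.
cur.execute("SELECT name FROM patients WHERE age > ? ORDER BY name", (40,))
result = [row[0] for row in cur.fetchall()]
print(result)  # ['Bob', 'Carol']
conn.close()
```

Note the `?` placeholders: parameterized queries let the DBMS separate code from data, which is also the standard defence against SQL injection.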

Session 3: Operating Systems

Operating systems (OS) are fundamental software that manage computer hardware and provide essential services for application programs. They act as an intermediary between users and the computer hardware, ensuring efficient resource allocation and enabling multitasking. Key functions of an OS include managing hardware resources like the CPU, memory, and storage, as well as providing user interfaces, managing files, and coordinating input and output operations. Popular operating systems include Microsoft Windows, macOS, Linux, and Android, each designed to cater to specific needs and environments. The evolution of operating systems has been driven by advances in technology, aiming to enhance performance, security, and user experience.

Session 4: Robotics & Autonomous Systems

Robotics and Autonomous Systems is a field of computer science and engineering focused on designing, building, and programming robots and intelligent machines capable of performing tasks without human intervention or with minimal supervision.
Robotics involves the development of physical machines (robots) equipped with sensors, actuators, and control systems to interact with the environment.
Autonomous systems extend robotics by incorporating intelligent decision-making capabilities, enabling machines to operate independently in complex, dynamic environments.

Session 5: Quantum error correction

Quantum error correction (QEC) is a set of techniques designed to protect quantum information from errors caused by decoherence, noise, and other quantum disturbances. Since quantum bits (qubits) are extremely fragile, QEC is essential for building reliable and scalable quantum computers.
Quantum error correction encodes a logical qubit into a larger number of physical qubits, spreading information redundantly to detect and correct errors without measuring the quantum information directly (which would collapse the quantum state).
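The redundant-encoding idea can be conveyed with a classical sketch of the 3-qubit bit-flip repetition code. This is a deliberate simplification: a real quantum code measures error syndromes without reading out the encoded state, which this toy model omits.

```python
# Classical sketch of the 3-qubit bit-flip repetition code: one logical
# bit is encoded as three physical bits, and majority voting corrects
# any single bit-flip error.

def encode(logical_bit):
    return [logical_bit] * 3            # |0> -> |000>, |1> -> |111>

def apply_bit_flip(codeword, position):
    flipped = list(codeword)
    flipped[position] ^= 1              # an X (bit-flip) error on one physical qubit
    return flipped

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0   # majority vote

encoded = encode(1)                     # [1, 1, 1]
corrupted = apply_bit_flip(encoded, 0)  # [0, 1, 1]: a single error
recovered = decode(corrupted)
print(recovered)  # 1: the logical bit survives one physical error
```

Two simultaneous flips would defeat the majority vote, which is why practical codes use many more physical qubits per logical qubit.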

Session 6: Quantum Computing

Quantum computing is an advanced field of computing that leverages the principles of quantum mechanics to process information in fundamentally new ways, promising to solve certain problems much faster than classical computers.
Quantum computing represents a paradigm shift with the potential to revolutionize computing, cryptography, and scientific research. While still in early stages, ongoing advances could unlock new capabilities beyond classical limitations.

Session 7: Data mining and pattern recognition

Data mining is the process of exploring large datasets to identify meaningful patterns, trends, and relationships. It combines techniques from statistics, machine learning, and database systems to extract knowledge from data. Pattern recognition is the science of automatically detecting patterns and regularities in data, often used to classify data into categories based on learned features.
Both data mining and pattern recognition are essential for making sense of vast data, enabling automated systems to discover insights, make predictions, and support intelligent decisions in many fields like finance, healthcare, and image processing.
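As a minimal sketch of the classification task mentioned above, a 1-nearest-neighbour classifier assigns a new point the label of its closest training example; the coordinates and labels below are made up for illustration.

```python
import math

def nearest_neighbor(train, point):
    # train: list of ((x, y), label) pairs; pick the label of the
    # training example closest to `point` in Euclidean distance.
    best = min(train, key=lambda ex: math.dist(ex[0], point))
    return best[1]

training_data = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
                 ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(nearest_neighbor(training_data, (1.1, 0.9)))  # A
print(nearest_neighbor(training_data, (4.9, 5.1)))  # B
```

Real pattern-recognition systems learn richer features and decision boundaries, but the principle of classifying by similarity to known examples is the same.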

Session 8: Data Science & Big Data

Data Science and Big Data are interconnected fields that focus on extracting meaningful insights from large and complex datasets using a combination of statistics, machine learning, and computational tools.
Big Data refers to datasets that are too large, fast, or complex to be processed using traditional tools.
Data Science provides the techniques to derive actionable insights, while Big Data provides the scale and complexity of modern data challenges. Together, they power innovations across sectors—from business intelligence to healthcare and smart cities.

Session 9: Cybersecurity & Privacy

Cybersecurity involves the strategies, technologies, and practices used to protect digital assets—including software, hardware, and data—from cyber threats such as hacking, viruses, phishing, ransomware, and denial-of-service (DoS) attacks.
Privacy focuses on the responsible collection, use, and sharing of personal data. It ensures individuals have control over how their information is used and that organizations comply with data protection laws.
Cybersecurity and privacy are closely related fields focused on protecting digital systems, networks, and information from unauthorized access, misuse, and damage, while ensuring that individuals’ personal data is handled responsibly and lawfully.

Session 10: Artificial Intelligence (AI) and Machine Learning (ML)

AI refers to the broader concept of machines being able to carry out tasks in a way that we would consider “smart” — such as reasoning, problem-solving, understanding language, and perception.
ML is a subset of AI where systems learn from data and improve their performance over time without being explicitly programmed.
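"Learning from data without being explicitly programmed" can be illustrated with a tiny example: fitting y = w·x by gradient descent on squared error, with the data (which follows y = 2x) invented for the sketch.

```python
# Toy machine learning: the rule y = 2x is never written into the code;
# the weight w is learned from examples by gradient descent.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on y = 2x

w = 0.0                      # initial guess
lr = 0.05                    # learning rate
for _ in range(200):
    # gradient of the mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # 2.0: the learned weight approaches the true slope
```

Modern ML models differ in scale and architecture, not in this core loop: define an error, compute its gradient, and adjust parameters to reduce it.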

Session 11: Multimodal learning

Multimodal learning is an area of machine learning and artificial intelligence that focuses on processing and integrating multiple types of data (modalities)—such as text, images, audio, video, and sensor data—to improve understanding, reasoning, and prediction.
Multimodal learning is crucial for creating intelligent systems that understand the world like humans do, through the integration of multiple sensory inputs. It is at the forefront of research in areas like AI-driven perception, robotics, healthcare, and multimedia analysis.

Session 12: Network protocols and simulation

Network protocols and simulation are key areas in computer networking that deal with how data is transmitted across networks and how network behavior can be analyzed and tested under various conditions.
Network protocols are a set of standardized rules that define how devices communicate over a network. They ensure that data is sent, received, and interpreted correctly across diverse hardware and software systems.
Network simulation is the process of using software tools to model and test network behavior in a virtual environment before actual deployment. It helps researchers and engineers evaluate performance, scalability, reliability, and security of networks.
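A minimal Monte Carlo sketch of the simulation idea: estimating how many transmission attempts a packet needs on a lossy link, assuming (hypothetically) a fixed 20% independent loss probability per attempt. Dedicated simulators such as ns-3 model queues, protocols, and topologies in far more detail.

```python
import random

def attempts_until_delivered(loss_prob, rng):
    # Retransmit until the packet gets through; count the attempts.
    attempts = 1
    while rng.random() < loss_prob:   # packet lost, retransmit
        attempts += 1
    return attempts

rng = random.Random(42)               # fixed seed for reproducibility
trials = [attempts_until_delivered(0.2, rng) for _ in range(10_000)]
mean_attempts = sum(trials) / len(trials)
print(round(mean_attempts, 2))        # close to 1 / (1 - 0.2) = 1.25
```

Even this toy model shows the value of simulation: the empirical mean converges to the analytical prediction, and the same harness could explore loss rates or retry policies before any real deployment.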

Session 13: 5G/6G and next-generation networking

5G, 6G, and next-generation networking represent the evolving landscape of wireless communication technologies, aiming to deliver ultra-fast, low-latency, and highly reliable connectivity to support emerging applications like autonomous systems, immersive experiences, and massive-scale IoT.
5G is the current generation of mobile networks, offering significant improvements over 4G in terms of speed, latency, and capacity.
6G is the next wave of wireless technology, currently in research and expected to launch commercially in the 2030s. It builds on 5G with even more ambitious goals.

Session 14: Internet of Things (IoT)

The Internet of Things (IoT) refers to a network of interconnected physical devices—such as sensors, appliances, vehicles, and machines—that collect, exchange, and act on data using the internet, often without human intervention.

IoT is transforming how we interact with the physical world by enabling smarter environments, automated systems, and data-driven decision-making across industries. As it evolves, it will play a central role in future technologies like smart cities, autonomous vehicles, and AI-powered environments.

Session 15: Cloud computing and edge computing

Cloud computing and edge computing are modern computing paradigms that address how data is processed, stored, and accessed across networks—but with different approaches to location, latency, and scalability.

Cloud computing delivers computing services—such as servers, storage, databases, networking, software, and analytics—over the internet (“the cloud”). It allows users to access powerful infrastructure without managing physical hardware.
Edge computing pushes computation and data storage closer to the data source (i.e., at the “edge” of the network), rather than relying solely on centralized cloud servers.

Session 16: Networking & Distributed Systems

Networking and Distributed Systems are areas of computer science that focus on how multiple computers communicate and collaborate to perform tasks, share resources, and maintain system reliability—often across geographic distances.
Computer networking involves the design and management of communication systems that allow data to be transferred between devices. It forms the backbone of the internet and most modern computing environments.
Distributed systems consist of multiple interconnected computers that work together as a single system. These systems aim to provide scalability, fault tolerance, and resource sharing across various nodes.

Session 17: Formal methods and logic in computing

Formal methods and logic in computing refer to the use of mathematically based techniques for the specification, development, and verification of software and hardware systems. They help ensure that systems are correct, reliable, and secure, particularly in safety-critical domains.
Formal methods involve creating precise mathematical models of software or hardware behavior. These models allow developers to prove properties such as correctness, consistency, and absence of bugs before systems are implemented or deployed.
Logic forms the foundation of formal methods and is used to reason about computation, algorithms, and system behavior.

Session 18: Computational complexity

Computational complexity is a field in theoretical computer science that studies the inherent difficulty of computational problems and classifies them based on the resources required to solve them, such as time (how fast) and space (how much memory).
Computational complexity provides a deep theoretical framework for understanding the limits of computation. It tells us not only how to solve problems, but whether some problems are even solvable within practical constraints.

Session 19: Algorithms and data structures

Algorithms and data structures are fundamental concepts in computer science that deal with how data is organized and how computational problems are solved efficiently.
An algorithm is a step-by-step procedure or set of rules designed to perform a specific task or solve a problem. Algorithms are evaluated based on their correctness, efficiency (in terms of time and space), and scalability.
A data structure is a way of organizing and storing data so it can be accessed and modified efficiently. The choice of data structure affects the performance of algorithms.
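The interplay described above can be illustrated with binary search: keeping the data in a sorted structure lets the algorithm find an element in O(log n) comparisons instead of the O(n) of a linear scan.

```python
def binary_search(sorted_items, target):
    # Repeatedly halve the search interval; requires sorted input.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))  # 4 (index of 11)
print(binary_search(data, 4))   # -1 (absent)
```

The trade-off is typical of data-structure choices: sorting costs effort up front (O(n log n)) but makes every subsequent lookup logarithmic.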

Session 20: Theoretical Computer Science

Theoretical Computer Science (TCS) is the branch of computer science that focuses on the mathematical and abstract foundations of computing. It seeks to understand the limits of what computers can do, how efficiently problems can be solved, and how to model computation itself.

Session 21: Human-robot interaction

Human-Robot Interaction (HRI) is an interdisciplinary field focused on studying, designing, and evaluating the ways humans and robots communicate, collaborate, and coexist safely and effectively.
Human-Robot Interaction is critical to making robots more accessible, trustworthy, and effective partners in various aspects of daily life and work. By improving HRI, we can foster smoother, safer, and more meaningful collaboration between humans and machines.

Session 22: Software Engineering

Software engineering is the systematic application of engineering principles to the design, development, testing, deployment, and maintenance of software systems. It aims to produce high-quality, reliable, and scalable software in a cost-effective and efficient manner.

Session 23: Automated testing and debugging

Automated testing and debugging are essential practices in software development aimed at improving code quality, reliability, and development efficiency by using tools and scripts to detect errors and ensure correct behavior without manual intervention.
Automated testing involves writing test scripts that automatically execute and verify whether a program behaves as expected. These tests can be run repeatedly, especially after code changes, to catch regressions and bugs early in the development process.
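A minimal sketch of such a test script, using Python's built-in unittest module; the `slugify` function under test is invented for the example.

```python
import unittest

def slugify(title):
    """Turn a title into a URL-friendly slug (example function under test)."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces_collapse(self):
        self.assertEqual(slugify("  Automated   Testing "), "automated-testing")

# Run the suite programmatically, as a CI pipeline would after each change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
outcome = unittest.TextTestRunner().run(suite)
print(outcome.wasSuccessful())  # True
```

Because the suite runs identically every time, it can be triggered on every commit, which is exactly how automated tests catch regressions early.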

Session 24: Code generation using AI

Code generation using AI refers to the use of artificial intelligence—particularly machine learning and natural language processing models—to automatically write, complete, or suggest code based on human input or existing code context.

AI code generation systems can take natural language prompts (e.g., “write a function to sort a list”) or partial code snippets and generate functional code in a target programming language. These tools are trained on vast code repositories and can understand patterns, syntax, and even coding best practices.

Session 25: Human-Computer Interaction (HCI)

Human-Computer Interaction (HCI) is a multidisciplinary field focused on the design, evaluation, and implementation of interactive computing systems for human use, and the study of how people interact with these systems.
HCI aims to make technology more usable, efficient, accessible, and enjoyable by understanding user behavior and designing systems that align with human capabilities and needs.

Session 26: Federated and distributed learning

Federated learning is a decentralized approach to training machine learning models where data remains on local devices (such as smartphones, IoT devices, or edge servers). Instead of collecting data in a central server, the model is trained locally on each device, and only model updates (e.g., gradients or weights) are sent back to a central server. The server then aggregates these updates to form a global model.

Distributed learning refers to a method of training machine learning models by splitting the computation across multiple machines, often in a data center or cloud environment. This is especially useful for training large-scale models (e.g., deep neural networks) that would be too slow or memory-intensive to train on a single machine.
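The aggregation step at the heart of federated learning can be sketched as federated averaging (FedAvg): each client trains locally and sends back only its weights, which the server averages. The clients' local training is faked here with preset weight vectors, purely for illustration.

```python
def fed_avg(client_weights):
    # Average each parameter across clients to form the global model.
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients
            for i in range(n_params)]

# Hypothetical weight vectors from three clients; their raw training
# data never leaves the devices, only these updates do.
client_updates = [
    [1.0, 2.0, -0.5],
    [3.0, 2.0, -0.5],
    [2.0, 2.0, -0.5],
]
global_model = fed_avg(client_updates)
print(global_model)  # [2.0, 2.0, -0.5]
```

Production systems weight the average by each client's data size and add compression and secure aggregation, but the privacy-preserving structure (share updates, not data) is the same.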