Provided services
Backend Development, Custom Software Development, Frontend Development, Test Automation Services, Software Integration Services
Client
Our client is a recognized international provider of solutions for the travel industry, with many years of experience in developing technologies for efficient interaction with global distribution systems (GDS) such as Galileo.

Product
The product is a comprehensive solution for the tourism sector that facilitates interaction with global distribution systems, enabling clients to optimize booking processes and data exchange while maintaining high performance and stability under large request volumes. Key features of the solution include rich analytics, detailed logging, and fault tolerance.
Challenge
The client faced the challenge of creating technological processes for the product that would ensure predictable development with short iterations, a high level of stability control, and a solution that would be as simple as possible to maintain. Another important task for the client was building a flexible and scalable architecture that combined microservice components to distribute load across the system.
The challenge for our development team was to form an efficient group of specialists capable of creating a complex solution for a large-scale and demanding ecosystem. We needed to meet high quality standards, applying CI/CD and automated testing to ensure development stability and reliability. At the same time, the team and architects required deep expertise to integrate multiple frameworks and libraries, ensuring their seamless operation and maximum utility for the project. The specifics of the project demanded an enterprise-level solution, resilient and reliable in transaction processing, particularly in commercial operations.
Solution
Adapting the development approach
To address the challenge set by the client, our team initially worked within a waterfall model. At our suggestion, however, the client agreed to transition to Scrum and shorter development iterations. This shift allowed the team to adapt more quickly to changes, receive timely feedback, and consistently release stable versions of the product with incremental changes. For the client, this approach enhanced oversight of the development process, enabling them to regularly review the team’s output and effectively monitor overall project progress.
Building a flexible and scalable architecture
Creating a flexible and scalable architecture that combined microservice components for load distribution was essential for the project’s success. Our team adopted a hybrid approach, combining traditional development practices with microservice architecture where justified. This combination allowed us to use the proven stability and reliability of traditional development for core, well-defined parts of the system, while microservices provided the needed flexibility, scalability, and rapid adaptability to new features and changes. By combining the two approaches, we allocated resources efficiently, eliminating the need for a full transition to microservices and thus reducing development costs and time. This strategic decision allowed the team to meet the client’s requirements effectively while optimizing the budget.
Forming a development team with broad technical expertise
To assemble a team capable of developing a complex backend solution for a large-scale, demanding ecosystem, we selected specialists with deep knowledge of the travel industry and practical experience in applying microservice architecture.
Mastering OTA schemas (XSD and XML) was a significant challenge for the team, given the complexity and scope of the standard. OTA (OpenTravel Alliance) defines a comprehensive framework for data exchange in the travel industry, and effective integration with booking and travel management systems requires detailed knowledge of its numerous schemas and protocols. The team had to navigate intricate data structures and validation rules while also integrating these schemas with external systems and databases, which added further complexity. Ensuring compliance with industry standards was critical, demanding both technical accuracy and the proper data handling practices essential for travel systems. All of this made learning and implementing the OTA schemas challenging, especially under a schedule that demanded rapid acquisition and application of new knowledge. This is where the team’s extensive experience in travel industry development proved invaluable.
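To give a flavor of the schema-validation work involved, the sketch below checks an XML message against an XSD using only the Java standard library. The schema and the `HotelSearch` element are simplified stand-ins of our own invention; real OTA message schemas are far larger and more intricate.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class OtaValidationSketch {
    // Toy schema standing in for an OTA message definition:
    // a single element with one required attribute.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='HotelSearch'>" +
        "    <xs:complexType>" +
        "      <xs:attribute name='city' type='xs:string' use='required'/>" +
        "    </xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    // Returns true when the XML message conforms to the schema.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false; // validation error or malformed XML
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("<HotelSearch city='NYC'/>")); // true
        System.out.println(isValid("<HotelSearch/>"));            // false: missing required attribute
    }
}
```

In practice such validation sits at the boundary of the system, rejecting malformed partner messages before they reach business logic.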
The project required substantial expertise in working with both relational and NoSQL databases, enabling the team to choose the most appropriate solution for each task. Skills in integrating and optimizing both kinds of databases within a single application were also necessary, as was configuring caching and intermediate layers (e.g., Redis) to increase access speed and minimize data processing delays. The specifics of the project further demanded a deep understanding of JTA (Java Transaction API) for transaction management, which was particularly important for maintaining data integrity and system reliability.
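Such an intermediate caching layer typically follows the cache-aside pattern: read from the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch is shown below, with an in-memory map standing in for Redis and a function standing in for a database query; both stand-ins are ours, not the project’s actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CacheAsideSketch {
    // In-memory map standing in for Redis.
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    // Stand-in for a (slow) database lookup.
    private final Function<String, String> backingStore;
    int storeHits = 0; // counts how often the "database" was consulted

    public CacheAsideSketch(Function<String, String> backingStore) {
        this.backingStore = backingStore;
    }

    public String get(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                      // cache hit: skip the database
        }
        String value = backingStore.apply(key); // cache miss: load from the store
        storeHits++;
        cache.put(key, value);                  // populate for subsequent readers
        return value;
    }

    public static void main(String[] args) {
        CacheAsideSketch layer = new CacheAsideSketch(k -> "row-for-" + k);
        layer.get("booking:42");
        layer.get("booking:42");                // second read served from cache
        System.out.println(layer.storeHits);    // 1
    }
}
```

A production version would add expiry and invalidation; the point here is only the read path that keeps hot data off the database.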
One of the key challenges of the project was creating an easily expandable backend capable of effectively adapting to changing requirements and supporting complex business rules under high load. To address this, the team chose an approach that integrated Apache Kafka and Drools. Drools stands out as a powerful tool for managing business rules in the project due to its flexibility and modularity. By allowing business rules to be defined independently of the core codebase, Drools provided the team with the adaptability to quickly respond to new client requirements without disrupting the system. Its modular rule management simplified the process of updating and scaling business logic, which was critical for a complex project with numerous scenarios. Furthermore, Drools’ seamless integration with Java and its high performance in executing a large number of rules ensured the system maintained efficiency and reliability under demanding conditions.
Apache Kafka was used as a data streaming platform, ensuring real-time event processing and a decentralized system architecture. Kafka served as a message broker, transferring events and data between microservices and modules where Drools applied business rules. This integration enabled the system to respond to events instantly, maintaining high performance and resilience while processing large data volumes.
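The shape of that event flow can be sketched in plain Java: below, a queue stands in for a Kafka topic and predicate-based rules stand in for Drools rules, illustrating how business rules stay decoupled, as data, from the code that consumes events. The event types, rule names, and thresholds are illustrative assumptions, not details from the project.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Predicate;

public class RuleStreamSketch {
    record BookingEvent(String type, double amount) {}

    // Rules kept as data, separate from the consuming code, Drools-style:
    // each rule pairs a name with a condition on the incoming event.
    record Rule(String name, Predicate<BookingEvent> when) {}

    // Drains the "topic" and returns the names of the rules that fired.
    public static List<String> process(BlockingQueue<BookingEvent> topic, List<Rule> rules) {
        List<String> fired = new ArrayList<>();
        BookingEvent event;
        while ((event = topic.poll()) != null) {
            for (Rule rule : rules) {
                if (rule.when().test(event)) {
                    fired.add(rule.name());
                }
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        BlockingQueue<BookingEvent> topic = new ArrayBlockingQueue<>(16);
        topic.add(new BookingEvent("booking.created", 12_000));
        topic.add(new BookingEvent("booking.created", 90));
        List<Rule> rules = List.of(
            new Rule("flag-high-value", e -> e.amount() > 10_000),
            new Rule("audit-all", e -> true));
        System.out.println(process(topic, rules)); // [flag-high-value, audit-all, audit-all]
    }
}
```

Swapping in new rules changes behavior without touching the consumer loop, which is the property that made the Kafka-plus-Drools combination attractive.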
For API documentation and for testing interactions between microservices and external systems, the team employed Swagger. Because the backend combined Apache Kafka, Drools, and numerous services, clear and accessible documentation was crucial so that different teams and external integrators could easily understand and use the system’s API. Swagger supported a high level of transparency in development, giving the team and third-party users a comprehensive understanding of how the various services functioned and what data they accepted or returned.
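A Swagger (OpenAPI) description of a service endpoint looks roughly like the fragment below; the path, schema, and service name are illustrative, not taken from the project.

```yaml
openapi: "3.0.3"
info:
  title: Booking Gateway API   # illustrative service name
  version: "1.0"
paths:
  /bookings:
    post:
      summary: Create a reservation
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/BookingRequest"
      responses:
        "201":
          description: Reservation created
components:
  schemas:
    BookingRequest:
      type: object
      required: [city]
      properties:
        city:
          type: string
```

From such a specification, Swagger UI renders browsable documentation that integrators can try against a running service.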
Advanced analytics and high-performance solutions
One of the critical challenges of the project was ensuring a high level of analytics, logging, and system performance to create a reliable enterprise-level solution. The team needed to develop a system capable of handling large data volumes, ensuring fast processing, and maintaining stability even under heavy loads. This required advanced tools and approaches beyond standard solutions. To address this challenge, the team chose an unconventional approach involving Apache Solr.
Apache Solr was chosen for its robust full-text search and analytics capabilities, which significantly enhanced data handling in the system. Its use in the project went beyond traditional search functionality: Solr also served analytical tasks and data logging. Integrating it into the microservice architecture required innovative approaches to distributed data storage and analytical performance, demonstrating the platform’s versatility. By addressing specific challenges such as improving system performance and ensuring high-speed search with minimal delays, Solr proved instrumental in processing large data volumes effectively, often replacing the need for custom development or alternative technologies.
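Analytical reads of this kind typically go through Solr’s select handler with faceting enabled, which returns aggregated counts rather than documents. The sketch below only builds such a query URL; the host, core name (`booking_logs`), and field names are our illustrative assumptions.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SolrQuerySketch {
    // Builds a Solr select URL that combines a full-text query with a
    // facet aggregation over one field, e.g. error counts per service.
    public static String facetQuery(String baseUrl, String core,
                                    String query, String facetField) {
        String q = URLEncoder.encode(query, StandardCharsets.UTF_8);
        return baseUrl + "/solr/" + core + "/select"
            + "?q=" + q
            + "&facet=true"
            + "&facet.field=" + facetField
            + "&rows=0"; // rows=0: return only aggregated counts, no documents
    }

    public static void main(String[] args) {
        System.out.println(facetQuery("http://localhost:8983", "booking_logs",
                "status:ERROR", "service_name"));
    }
}
```

Issuing the printed URL against a running Solr instance would return, per `service_name`, how many log entries matched `status:ERROR`, the kind of aggregation the project used for analytics over logged data.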
Ensuring comprehensive testing and CI/CD integration
The team adhered to high-quality standards, applying CI/CD and automated testing to ensure stable and reliable development. Given the large number of scenarios to be handled at the backend level, the team decided to create an automated testing framework based on Robot Framework.
Robot Framework enabled the creation and maintenance of complex test scenarios, helping identify and fix errors promptly. It also allowed the team to integrate testing into CI/CD processes, ensuring continuous quality checks at every development stage. Additionally, testing complex UI components was conducted using Selenide, a tool that allowed the team to perform detailed testing of the user interface, verifying complex interaction scenarios.
By using Robot Framework and Selenide, the team was able to comprehensively approach the testing process, ensuring the reliability of both the backend and the user interface.
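For illustration, an automated backend scenario in Robot Framework might look like the following; the endpoint, payload, and the use of RequestsLibrary are our assumptions rather than details from the project.

```robotframework
*** Settings ***
Library    RequestsLibrary

*** Variables ***
&{BOOKING_PAYLOAD}    city=NYC

*** Test Cases ***
Booking Endpoint Creates A Reservation
    [Documentation]    Illustrative scenario; endpoint and payload are hypothetical.
    Create Session    backend    http://localhost:8080
    ${response}=    POST On Session    backend    /api/bookings    json=${BOOKING_PAYLOAD}
    Should Be Equal As Integers    ${response.status_code}    201
```

Scenarios like this run on every pipeline build, so a regression in the backend fails the build before it reaches a release.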
Result
As part of the project, a high-performance backend for a complex system was developed, fully meeting the client’s stringent quality and performance requirements. Comprehensive automated testing of all backend components was conducted using advanced tools, ensuring the stability and reliability of the solution. The system was optimized to handle peak loads, with thorough performance measurements performed and necessary adjustments made to maintain high performance levels.
A complete set of scripts for building, deploying, and installing application components was developed, including the integration of containerization processes to ensure system flexibility and scalability.
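As an illustration of the containerization side, the build of one backend component could be packaged along these lines; the base images, module layout, and names are hypothetical, not the project’s actual scripts.

```dockerfile
# Illustrative multi-stage build: compile with Maven, ship only the JRE image.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline      # cache dependencies in their own layer
COPY src ./src
RUN mvn -q package -DskipTests

FROM eclipse-temurin:17-jre
COPY --from=build /app/target/*.jar /app/service.jar
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```

Keeping the build stage separate from the runtime stage keeps images small and lets the same deployment scripts scale individual components independently.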
Technologies
Databases: MySQL
Frontend: jQuery, CodeMirror, vkBeautify, AngularJS, TreeView
Backend: Jetty, Groovy, JAXB2, Reflection, Xuggle (video recording), custom plugin architecture
Test Automation: Selenium, WebDriver, Mockito, JUnit
CI/CD and DevOps: Jenkins, Maven, Nexus
Languages, Protocols, APIs, Network Tools: XML/XSLT/XSD/XPath
Software Engineering and Management Tools: Git + Gerrit (code review)