iJS Magazine   Volume 14 - React: From Implementation to Deployment

Price: 0

Available from: July 2024

Authors:
Manfred Steyer,
Sebastian Springer,
devmio Editorial Team,
Jaime Garcia,
Candide Bekono,
Simon Wijckmans

Angular is often used for the front end of large, mission-critical solutions. Here, you must pay close attention to a maintainable architecture while also avoiding over-engineering. Current features such as standalone components and standalone APIs can help.

This two-part series shows how to reconcile both requirements, starting with the implementation of your strategic design. Based on standalone components and standalone APIs, the architecture is implemented with the open-source project Sheriff [4]. The examples used in this article can be found in my GitHub account [1].

The Guiding Theory: Strategic Design from DDD

Strategic Design, one of the two original disciplines of Domain-Driven Design (DDD), is the guiding theory for structuring modern front-ends. At its core, this involves breaking down a software system into different functional subdomains. Let’s take an airline as an example. You might see the subdomains shown in Figure 1.


Fig. 1: Domain cut

To identify the individual domains, look at the business processes the system is to support.

The interaction between developers and architects on the one hand and domain experts on the other is essential. Workshop formats like event storming [2], which combine DDD with ideas from agile software development, are ideal for this. Context maps represent the relationships and dependencies between the individual domains (Fig. 2). The goal is to decouple the individual domains from each other: the less they know about each other, the better. This prevents changes in one part of the application from affecting other parts and improves maintainability. In larger projects, it is common to assign one or more domains to each subteam (box: "Domain vs. bounded context").


Fig. 2: A simple context map

Strategic Design also provides a few other patterns and considerations that help implement loose coupling. In our example, this could mean exposing only a few selected services for booking, or distributing information about bookings via messaging in the backend.

Domain vs. bounded context

Strictly speaking, a domain is mapped to one or more bounded contexts as part of the implementation, and each bounded context may contain one or more domains. The bounded context thus reflects the solution view, while the domain represents part of the problem view. The domain model of each bounded context reflects its subject matter, e.g., the structure and handling of flights and tickets. This domain model is only meaningful within its bounded context. Even if the same terms are used in other contexts, those contexts most likely have a different view of them. For example, a flight looks different from a booking perspective than from a boarding perspective, and these two views are deliberately kept separate. This prevents the domain model from mixing contexts and becoming a confusing model that tries to describe too much at once. For simplicity, the explanations in this article assume one bounded context per domain.

Transition to Source Code: the Architecture Matrix

For the implementation in the source code, further subdividing the individual domains into different modules makes sense (Fig. 3).


Fig. 3: Architecture matrix

Categorizing these modules increases clarity. I suggest using the following categories, which have been useful in our daily work:

  • Feature: A feature module implements a use case or a technical feature with so-called smart components. Owing to this focus, such components are not very reusable. Another characteristic of smart components is that they communicate with the backend; in Angular, this communication typically takes place via a store or services.

  • UI: UI modules contain so-called dumb or presentational components. These are reusable components that support the implementation of individual features but are not directly aware of them. An example is the implementation of a design system, which consists of presentational components. However, UI modules can also contain general components that are used across use cases, such as a ticket component that ensures tickets are displayed consistently in different features. These components usually communicate with their environment only via properties and events and have no access to the backend or a store.

  • Data: Data modules contain the respective domain model (more precisely, its client-side view) plus the services that operate on it. Such services validate entities and communicate with the backend. State management, including the provision of view models, can also be accommodated in data modules. This is especially useful when several features of the same domain are based on the same data.

  • Util: General auxiliary functions are accommodated in utility modules. Examples include logging, authentication, or working with date values.

In addition to the domains, a shared area provides code for all of them. It should primarily contain technical code; domain-specific code is usually found in the individual domains.

The structure shown here brings order to the system: there are fewer discussions about where to find or place certain code. Based on this matrix, two simple but effective rules can be introduced:

  • Each domain can only communicate with its own modules, in the spirit of strategic design. An exception is the shared area, to which every domain has access.

  • Each module may only access modules in deeper layers of the matrix. Each module category thus forms a layer of its own.

Both rules support the decoupling of the individual modules or domains and help to avoid cycles.

Project Structure for the Architecture Matrix

The architecture matrix is represented in the source code in the form of folders. Each domain has its own folder, which in turn gets a subfolder for each of its modules (Fig. 4).


Fig. 4: Folder structure with domains

The module names use the names of the respective module categories as a prefix. The category and the position of each module in the architecture matrix are therefore obvious at first glance. Within the modules, there are typical Angular building blocks such as components, directives, pipes, or services.
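
For illustration, such a folder structure might look as follows — a minimal sketch using hypothetical module names derived from the article's flight example:

src/app/domains/
  ticketing/
    feature-booking/
    feature-search/
    ui-flight-card/
    data/
    util-formatting/
  shared/
    util-auth/
    util-logger/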

Since the introduction of standalone components (as well as standalone directives and pipes), the use of Angular modules is no longer necessary. Instead, the standalone flag is set to true (Listing 1). For components, the compilation context must also be imported: all other standalone components, directives, and pipes used in the component's template.

Listing 1

@Component({
  selector: 'app-flight-booking',
  standalone: true,
  imports: [CommonModule, RouterLink, RouterOutlet],
  templateUrl: './flight-booking.component.html',
  styleUrls: ['./flight-booking.component.css'],
})
export class FlightBookingComponent {
}  

To define its public interface, each module gets an index.ts file. This file is a barrel that specifies which parts of the module may also be used outside of it:

export * from './flight-booking.routes'

Care must be taken when maintaining the published constructs, because breaking changes there can affect other modules. Anything not published here, however, is an implementation detail of the module, so changes to it are less critical.
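
For illustration, a consuming module inside the same domain would import only via this public interface — a minimal sketch with hypothetical file paths:

// allowed: import via the module's public interface (index.ts)
import { FlightBookingFacade } from '../data';

// not allowed: a deep import that bypasses the barrel, e.g.
// import { FlightBookingFacade } from '../data/services/flight-booking.facade';
// Sheriff, introduced below, flags such accesses.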

Enforcing the Domain Cut with Sheriff

The architecture discussed so far is based on several conventions:

  • Modules may only communicate with modules of the same domain as well as shared.
  • Modules may only communicate with modules of lower layers.
  • Modules may only access the public interface of other modules.

The open-source project Sheriff can enforce these conventions via linting. In case of non-compliance, an error message is output in the IDE (Fig. 5) or on the console (Fig. 6).

Alt text

Fig. 5: Sheriff in the IDE

Alt text

Fig. 6: Sheriff on the console

The former provides immediate feedback during development, whereas the latter can be automated in the build process. This can be used to prevent source code that violates the defined architecture from ending up in the main or dev branch of the source code repo.
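
To automate the console-based check, the lint run can be exposed as an npm script that the CI pipeline executes on every push — a minimal sketch, assuming an Angular CLI workspace with the ESLint builder set up:

{
  "scripts": {
    "lint": "ng lint"
  }
}

A pipeline step running npm run lint then fails the build on any architecture violation before a merge into main or dev.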

When setting up Sheriff, the following two packages must be obtained via npm:

npm i @softarc/sheriff-core @softarc/eslint-plugin-sheriff -D 

The first is Sheriff itself; the second is its binding to ESLint. The latter is registered in the .eslintrc.json file in the project root (Listing 2).

Listing 2

{
  [...],
  "overrides": [
    [...],
    {
      "files": ["*.ts"],
      "extends": ["plugin:@softarc/sheriff/default"]
    }
  ]
} 

Sheriff considers any folder that contains an index.ts file to be a module. By default, Sheriff prevents other modules from bypassing this index file and thus accessing implementation details. The sheriff.config.ts file, which must be set up in the root of the project, specifies categories (tags) for the individual modules and defines dependency rules (depRules) based on them. Listing 3 shows a Sheriff configuration for the architecture matrix discussed above.

Listing 3

import { noDependencies, sameTag, SheriffConfig } from '@softarc/sheriff-core';
 
export const sheriffConfig: SheriffConfig = {
  version: 1,
 
  tagging: {
    'src/app': {
      'domains/<domain>': {
        'feature-<feature>': ['domain:<domain>', 'type:feature'],
        'ui-<ui>': ['domain:<domain>', 'type:ui'],
        'data': ['domain:<domain>', 'type:data'],
        'util-<util>': ['domain:<domain>', 'type:util'],
      },
    },
  },
  depRules: {
    root: ['*'],
 
    'domain:*': [sameTag, 'domain:shared'],
 
    'type:feature': ['type:ui', 'type:data', 'type:util'],
    'type:ui': ['type:data', 'type:util'],
    'type:data': ['type:util'],
    'type:util': noDependencies,
  },
};  

The tags refer to folder names. Expressions like <domain> or <feature> are placeholders. Any module below src/app/domains/<domain> whose folder name starts with feature- is assigned the categories domain:<domain> and type:feature. In the case of src/app/domains/booking, these would be the categories domain:booking and type:feature.

The dependency rules under depRules pick up the individual categories and define, for instance, that a module may only access modules of the same domain as well as domain:shared. Further rules define that each layer may only access the layers below it. Thanks to the rule root: ['*'], all folders in the root folder and below that are not explicitly categorized are allowed to access all modules; this mainly affects the application's shell.

Lightweight Path Mappings

Path mappings are a useful way to avoid unreadable relative paths within imports. With them, instead of:

import { FlightBookingFacade } from '../../data'

you can use the following formulation:

import { FlightBookingFacade } from '@demo/ticketing/data'

Such three-part imports consist of the project or workspace name (e.g. @demo), the domain name (e.g. ticketing), and a module name (e.g. data), and thus reflect the desired position in the architecture matrix. Regardless of the number of domains and modules, this notation can be enabled with a single path mapping in the tsconfig.json file in the project root (Listing 4).

Listing 4

{
  "compileOnSave": false,
  "compilerOptions": {
    "baseUrl": "./",
    [...]
    "paths": {
      "@demo/*": ["src/app/domains/*"],
    }
  },
  [...]
} 

IDEs such as Visual Studio Code should be restarted after this change so that it is taken into account.

Standalone APIs

Since standalone components have made the controversial Angular modules optional, the Angular team now provides standalone APIs for registering libraries. Well-known examples are provideHttpClient and provideRouter (Listing 5).

Listing 5

bootstrapApplication(AppComponent, {
  providers: [
    provideHttpClient(),
    provideRouter(APP_ROUTES, withPreloading(PreloadAllModules)),
 
    importProvidersFrom(NextFlightsModule),
    importProvidersFrom(MatDialogModule),
 
    provideLogger({
      level: LogLevel.DEBUG,
    }),
  ],
}); 

Essentially, these are functions that return providers for the required services. The selection of these providers and the behavior of the library can be influenced by passing in a configuration object. An example of this is the route configuration that provideRouter receives.

From an architectural point of view, standalone APIs fulfill another purpose: they allow a part of the system to be regarded as a black box that can be developed further independently. The black box can become a gray box by passing in a configuration object. In that case, the behavior of the system part can be adapted via well-defined settings without giving up loose coupling. This also reflects the Open/Closed Principle: open for extension (through configuration and/or polymorphism), but closed for modification by the consumer.

As an example of a custom standalone API, Listing 5 calls a provideLogger function that sets up a logger; Listing 6 shows its implementation. The provideLogger function takes a partial LoggerConfig object, so the caller only has to specify the parameters relevant to the current case. To get a complete LoggerConfig, provideLogger merges the passed configuration with a default configuration. Based on this, various providers are returned. The makeEnvironmentProviders function from @angular/core wraps the generated provider array in an object of type EnvironmentProviders. This type can be used when bootstrapping the application and within routing configurations, and thus allows providers to be set up for the entire application or for individual parts of it.

Listing 6

export function provideLogger(
  config: Partial<LoggerConfig>
): EnvironmentProviders {
  // merge the passed partial configuration with the defaults
  const merged = { ...defaultConfig, ...config };

  return makeEnvironmentProviders([
    LoggerService,
    // provide the merged configuration
    {
      provide: LoggerConfig,
      useValue: merged,
    },
    // provide the configured formatter
    {
      provide: LOG_FORMATTER,
      useValue: merged.formatter,
    },
    // register each configured appender as a multi provider
    merged.appenders.map((a) => ({
      provide: LOG_APPENDERS,
      useClass: a,
      multi: true,
    })),
  ]);
}

In contrast to a conventional provider array, EnvironmentProviders cannot be used within components. This restriction is deliberate: most libraries, such as the router, are designed to be used across components rather than being provided by an individual component.
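
They can, however, be used within routing configurations. The following minimal sketch, with a hypothetical route constant and import path, scopes the logger discussed above to one part of the application:

import { Routes } from '@angular/router';
// provideLogger and LogLevel come from the custom logger in Listing 6
import { provideLogger, LogLevel } from './logger'; // hypothetical path

export const FLIGHT_BOOKING_ROUTES: Routes = [
  {
    path: '',
    // EnvironmentProviders are accepted here; the logger configuration
    // is then available to this route subtree only
    providers: [provideLogger({ level: LogLevel.DEBUG })],
    children: [
      // child routes of the booking area
    ],
  },
];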

Summary

Strategic Design divides a system into different parts that are implemented as independently of each other as possible. This decoupling prevents changes in one area of the application from affecting others. The architectural approach presented divides the individual domains into different modules, with the open-source project Sheriff ensuring that the individual modules only communicate with each other according to the established rules.

This approach enables the implementation of large frontend monoliths that can be maintained over the long term. Because of their modular structure, they are also referred to as "moduliths". One disadvantage of such architectures is increased build and test times. This problem can be solved with incremental builds and tests, which will be covered in the second part of this article series.

Links & Literature

  1. https://github.com/manfredsteyer/modern-arc.git

  2. https://www.eventstorming.com

  3. https://go.nrwl.io/angular-enterprise-monorepo-patterns-new-book

  4. https://github.com/softarc-consulting/sheriff

We spoke with Billy Kovalsky, VP of Products at Sisense, about Compose SDK for Fusion. Learn what its use cases are, its benefits, and how users can create a customized data experience in their applications.

devmio: Who is the ideal developer for using Sisense Compose SDK for Fusion? (Front-end, Back-end, Full-stack?)

Billy Kovalsky: Compose SDK for Fusion is designed for use with front-end frameworks so the ideal developer would be a front-end or a full-stack engineer.

devmio: How does the learning curve for Compose SDK for Fusion compare to traditional methods of integrating Sisense analytics?

Billy Kovalsky: Compose SDK for Fusion supports three main JavaScript frameworks: React, Angular, and Vue. These represent most of the frameworks used on the web, and TypeScript support provides code completion in integrated development environments. If a developer knows one of these frameworks, it's easy to learn Compose SDK for Fusion, as it's mainly based on those frameworks' fundamental foundations. With Compose SDK for Fusion, building data insights into applications using our analytics framework feels just as natural as using components from UI frameworks such as Material UI, Chakra UI, or React Bootstrap. The additional steps developers need to learn, such as the scope and logic of querying, represent a small portion of the work.

devmio: While Sisense highlights faster go-to-market, are there benefits to using Compose SDK for Fusion beyond just development speed? (e.g., deeper integration, improved user experience)

Billy Kovalsky: What's most powerful about the platform is that developers can make analytic insights look and feel like a natural part of the app. Compose SDK for Fusion gives developers the tools to integrate embedded analytics into the flow of their work, gaining complete control over the way the analytics look and behave. By allowing for deep integration within CI/CD and the software development lifecycle, developers can build much more robust, stable apps at scale, providing an enhanced user experience with rich, customized, and contextualized analytics.

The platform addresses how to make the analytics dynamic so it can react to different user flows, and easily enable actionable insights within the core application code. To this end, developers can query and filter using Compose SDK for Fusion on the front end, according to what the user is trying to accomplish, while the analytics and application code share the same context. This eliminates the friction developers usually experience, caused by the logical separation of analytics and application code which is required by traditional embedding approaches.

devmio: Can you provide a specific example of how a company might use Compose SDK for Fusion to create a customized data experience within their application?

Billy Kovalsky: We created this video featuring a fictional e-commerce application with embedded analytics created with Compose SDK for Fusion. It used to be that store managers and employees only had access to dashboards showing analytics for measures such as sales. With Compose SDK for Fusion, end users can dive deeper into individual product analytics. For example, a user could quickly and easily see the ad performance on a specific product by simply hovering their mouse over the image of the product for which they want to gain insight. The analytics simply pop up right then and there without clicking for any further information.

Additionally, the user can open the AI chat assistant on any page of the application and ask questions to help them make better decisions. For example, when listing more products for sale or placing orders directly with the supply chain within the application, the user opens the AI chat popup and simply asks, “Which products had the highest growth in sales this week?” They then receive the insights directly in the conversation and can refine them through dialogue until they have exactly what they need to take action. By using this e-commerce application, the user achieves better outcomes by leveraging their own sales data to make better decisions at the point of action.

Sisense is looking to go further and to provide insights and trajectories featuring notifications for when more products would need to be stocked up, for example. This additional layer of insight is where the extra value comes as end users can query and build their own dashboards according to their roles within the store. For example, if I am the marketing manager, I can check for different data points, create analytic measures with those, and set them up in a dashboard that serves me the best. By contrast, a store manager could create their own measures with their own analytics view. We see the possibilities for providing more than just analytics but alerts for when inventory is low, for example, ahead of a busy holiday season with the recommendation to stock up.

devmio: What resources are available for developers who encounter challenges while using Compose SDK for Fusion? (Documentation, Forums, Support Channels)

Billy Kovalsky: There is a dedicated and extensive documentation website with guides and tutorials. Sisense also launched a playground where developers can see side-by-side code snippets and a live preview of what it will render for developers to experiment with. There is also an online community where developers can get answers to different questions and gain consultations. Finally, developers can reach out to our support department and get help.

devmio: How do the co-authored dashboards in Fusion Winter 2024 improve collaboration between different teams working on the same analytics project?

Billy Kovalsky: Co-authoring enables a collaborative way of working on analytical assets such as dashboards. It might seem at first glance that only the dashboard designer will benefit from co-authoring but in fact, it can change the whole software development life cycle. For example, the customer's release flow is improved when there are several developers who need to make changes in different widgets on the same dashboard. With the new release, developers will not have to wait their turn to work on elements and will be able to get work done faster. This will change the speed of fixing or delivering new insights to the end users.

Agile recruitment is a flexible approach strongly inspired by agile software development. In particular, it helps to manage expectations through the principle of "inspect and adapt".

In essence, agile recruitment is the use of iterative and flexible methods in the recruitment process. This approach is strongly inspired by the principles of agile software development such as Scrum and Kanban, and some elements may already be used by your HR department.

The main goal of agile recruitment is to evolve an otherwise static approach towards more transparency and greater efficiency and effectiveness. Ultimately, the aim is to reach the candidates you are looking for quickly and to identify them as suitable. The goals derived from this are:

  • Faster talent acquisition
  • Improved quality of recruitment
  • Adaptability of the recruitment process to market dynamics
  • Increased employee engagement and employee retention
  • A broader pool of candidates
  • A positive impact on employer branding

Elements of agile recruitment

There is not just one way to move forward. Various elements can or must be combined to achieve the desired result. However, the following have proven themselves and are "good practice", so to speak:

  • Short iterations: Each iteration can focus on a specific phase
  • Continuous communication with applicants and stakeholders: regular updates, feedback and timely responses
  • Cross-functional collaboration: evaluation of candidates from different perspectives (HR, future colleagues, future manager, etc.)
  • Candidate-centred approach: Does the recruitment process actually meet the needs and expectations of the candidates? Continuous feedback and continuous improvement is the only way to find out; to do this, you should seek regular feedback from candidates and hiring teams
  • Relationship building: doing this with potential candidates not only helps with feedback but can also provide a pool of candidates for the future
  • Emphasise diversity and inclusion (this should be a given): Different perspectives and backgrounds lead to a more diverse candidate pool and innovation

Phases of agile recruitment

Every organisation has its own unique recruitment process and requirements that influence the specific phases of agile recruitment. Agile recruitment depends less on the individual phases and more on how well the collaboration between those involved works and develops. Some relevant phases can be:

  • Planning and sourcing: define hiring needs and requirements including required skills, qualifications and cultural compatibility; build a talent pipeline and proactively source candidates through various channels
  • Candidate screening: Conduct initial assessments of candidates based on predefined criteria
  • Agile interviewing: Conduct collaborative interviews with multiple stakeholders; use behavioural and situational questions to assess candidates' problem-solving skills and adaptability
  • Feedback and iteration: Get immediate feedback from candidates and stakeholders after each interview and adjust the selection process
  • Evaluation and decision making: Assess candidates based on their technical skills and alignment with the company's values
  • Offer and onboarding: Make successful candidates an offer quickly and transparently; facilitate a seamless onboarding process - this is surprisingly often missed
  • Continuous improvement (inspect and adapt): Continually evaluate and refine the agile recruitment process based on candidate feedback, metrics, and overall team performance; identify areas for improvement and make the necessary adjustments

The role of expectations in the recruitment process

Candidates and hiring companies have expectations. It is helpful to formulate these clearly. On the applicant side, these can be expectations such as:

  • Job fit: Does the position really match the applicant's skills?
  • Corporate culture: Consistency with your own principles, a supportive and inclusive workplace
  • Communication and transparency: A lack of feedback or long periods of silence can lead to frustration and disappointment
  • Candidate experience: You value professionalism, fairness and respect from recruiters and hiring managers
  • Pay and benefits: Aligning pay with market standards and offering attractive benefits

If new hires do not work out and their expectations are not met, this is sometimes due to an overly filtered presentation of the job:

  • The skills they are looking for are not really required ... or not yet
  • Top-down attitude in management contradicts the promised agile environment
  • Constant firefighting takes place instead of developing something new
  • PowerPoint and Excel battles are demotivating; some organisations tend to create internal documents that are too extensive
  • Culture descriptions that deviate from the actual culture
  • Low agile maturity: coming from a highly agile workplace can sometimes lead to a bit of waterfall culture shock if you are unprepared

The challenges of an agile recruitment process

Time constraints and a fast and iterative recruitment process can lead to rushed decisions, possibly resulting in inappropriate hires. The following issues should be kept in mind:

  • Alignment with stakeholders: Differing interests are a challenge in aligning expectations and decision criteria
  • Data management/analytics: Accurate and relevant data for analysis can be a challenge
  • Change management: Agile recruitment requires a change in mindset and processes for recruiters and the organisation as a whole
  • Adaptability of roles: For some roles, it can be a challenge to apply the same approach consistently
  • Over-emphasis on specific skills: Heavy focus on technical skills and immediate role requirements can lead to suitable candidates being overlooked
  • Resistance to change: Employees and stakeholders accustomed to traditional recruitment methods may resist the introduction of agile practices
  • Feedback and continuous improvement: Organisations may struggle to gather and respond to feedback from candidates and internal stakeholders

Recruitment phases should not be strictly sequential like a waterfall; they should be able to overlap so that it is possible to jump back and forth between them if necessary. On the whole, agile recruitment is about collaborating in a structured way with all those involved and continuously improving the process.

In light of the recent polyfill.io incident, we sat down to speak with JavaScript expert Simon Wijckmans, on how the incident occurred and what can be done about it. Simon is the founder of c/side, a cybersecurity company with tools for monitoring, optimizing, and securing vulnerable browser-side third-party scripts, and has previously worked at Cloudflare.

devmio: Could you talk a little about your career journey and what led you to founding c/side?

Simon Wijckmans: I was originally born in Belgium. While growing up, I realized that schools did not provide me enough access to the topics I cared most about (most notably, technology, engineering, and business). There was a loophole, which I went ahead and used at 16: I built a team to teach me the required knowledge for my degree and built my own model for important subjects—like computer science, economics, and law—not included in the curriculum. This path gave me the degree I sought, but it also programmed me to pursue a life of continuous learning.

I then joined Microsoft as a contractor at 17, later full-time. Afterwards, I joined Cloudflare and became a Product Manager. About three years later, I joined Vercel as a Senior Product Manager working on their enterprise products. Finally, I joined Hydra (a Y Combinator startup) as Product Lead before now running my own entrepreneurial journey with c/side.

I tried to shape my journey so that I could learn from different-phased companies (startups and enterprises) and in various roles (both internal and customer-facing). Starting my own business was always the plan. I had many side projects over the years, but I wanted to gather a ton of experience along the way so that I felt comfortable and familiar with hard situations. A decade in, the right opportunity presented itself and c/side was founded.

devmio: Given your expertise in JavaScript security and your experience at Cloudflare, how do you assess the severity of the recent Polyfill attack?

Simon Wijckmans: Unfortunately, this high-profile attack is the perfect example of what we’re trying to solve. First and foremost, it shows that third-party scripts on websites can have a profound impact and these sites would never know. Websites are sailing completely blind on the client side, and it’s incredibly dangerous to expose your users to such attacks.

Secondly, it showed that scripts are sticky. About half a million websites (and possibly more) still had the Polyfill script on their site, where realistically the script is there only to address old browsers used by fewer than 1% of users (and decreasing every day). But old scripts that aren’t removed become a permanent risk factor, as this major incident showed. Not only mom-and-pop shops with sites created by small agencies had Polyfill still on it, but even Hulu, The Guardian, Intuit, and other very large sites still had it embedded.

Thirdly, and this is the scary part, this attack had a relatively low impact. That won’t always be the case. Here, a visitor being redirected to a betting site or adult content site is a very simple and visible attack, but the bad actors could have easily injected a script with far less visible and dangerous intentions. A malicious script could be designed to activate only within a PWA mobile app, or a crypto-mining script might trigger only when the browser detects an IoT device with limited debugging capabilities. As much as this was already scary and caught a lot of headlines, the internet by and large got lucky. This could have been way more severe, and nothing is stopping a bad actor from trying the same type of attack again. We must continuously monitor our entire supply chain, including the client, without just relying on sampling.

“We must continuously monitor our entire supply chain, including the client, without just relying on sampling.”

devmio: What specific vulnerabilities in the JavaScript ecosystem were exploited by the attackers in the Polyfill incident?

Simon Wijckmans: Polyfill was originally an open-source project allowing websites to use modern JavaScript features in older browsers like Internet Explorer. Eventually, this largely became unnecessary as almost all users have moved to more modern browsers. The open-source code itself was actually fine, but the domain to reference the script, Polyfill[.]io was purchased by a Chinese company called Funnull. Funnull then altered the code served through that domain, which redirected a percentage of users to adult and betting websites based on their User-Agent.

The problem is that threat feed vendors are slow to catch up. For days after the news broke, many didn't update their registries and still marked this domain (and other domains used in the attack) as safe.

devmio: Beyond redirection attacks, what other types of malicious activities can be achieved by compromising JavaScript code?

Simon Wijckmans: A simple redirect like this is one of the most simple outcomes. But indeed, much more is possible. People are likely most afraid of stealing personal information, like login credentials or even credit card details as was seen in the infamous 2018 British Airways attack. (We actually now own the domain that was used there and turned it into an educational website that outlines the entire attack for anyone who wants a deep dive on that attack: baways.com.)

But technically, anything that is possible in the browser can be achieved through this attack vector. In the BrowseAloud supply chain attack, the malicious actors injected the CoinHive cryptocurrency mining script into this hijacked script.

The sky truly is the limit in these types of attacks, and more is possible every day as we make browsers more capable and complex.

devmio: How can developers effectively identify and mitigate risks associated with third-party JavaScript libraries and CDNs?

Simon Wijckmans: For starters, threat feeds are definitely not the way. They fundamentally "don't know what they don't know" and are playing catch-up each time news of an attack breaks.

We believe the only way is to check the full payload of the code before it’s loaded in the browser of the user every time it gets loaded. Only then will you be able to see exactly what’s going on. We have engineered c/side to do this in a proxy. Only when it’s safe are the scripts served to the users. Through some clever tricks and engineering, it doesn’t even cause latency.

You can do this in CDNs too, but that isn’t 100% foolproof either. You only need to go as far back as 2021 to the cdnjs vulnerability to see how that can go wrong too. Over 12% of all websites on the internet inject at least one script through cdnjs, illustrating the severity of the problems should it go wrong.

We should not trust that what we get from a third-party is de facto safe. We should verify, as everyone can have a bad day and bad actors are on the lookout for every opportunity they can get.

devmio: What role does developer education and training play in preventing these types of attacks? How can organizations build a robust security culture to address the evolving JavaScript threat landscape?

Simon Wijckmans: Attacks like these keep happening and the developer community is increasingly aware of this issue, but security leadership often isn't. However, the developers are usually not the ones who want to add these scripts; they are asked by other teams like Marketing, Legal, HR, Data, etc. to add them. The lack of governance around adding a script is a major issue, as for many companies adding a third-party script is a gray area in their processes. We hope that using a service to track client-side behaviors will soon become a basic no-brainer to protect users. With a tool like what we built, you can knock this off the list in minutes and no longer need to worry about it.

“Attacks like these keep happening and the developer community is increasingly aware of this issue, but security leadership often isn’t.”

devmio: With the increasing complexity of web applications and the growing reliance on JavaScript, what do you see as the biggest challenges for developers in securing their code?

Simon Wijckmans: Browsers are rapidly evolving. Staying on top of all the security risks of an evolving platform that at the same time is actively being adopted for more things is very, very hard. The risks of client-side attacks are evolving at a fast pace.

One interesting development is that these client-side attacks are coming to mobile applications. With the creation of Progressive Web Apps (PWAs), suddenly websites are turned into web apps—and therefore the entire client-side attack surface of a browser now applies to a mobile app.

The supply chain in general is known to be a big problem in shipping secure code, but many in leadership are not yet aware of the unmonitored risks client-side. The current state of supply chain security is a bit like a leak in a tire: it doesn’t matter if you close down five holes, if there are two big holes left, you’ll still end up with a flat tire. Attackers will shift their attack to an area that is least secure.

devmio: With the rise of AI, I suspect these attacks will grow more sophisticated. How do you see AI impacting JS or other related security attacks?

Simon Wijckmans: AI has made it easier to get complicated actions done better and faster. This is true for good code, but also for malicious code. I’d suspect the amount of attacks will only go up, especially in the short-to-mid-term as companies are not paying enough attention to this attack vector. For client-side security, because of how many detection mechanisms work, AI is a very severe threat. Slightly altering the code so that it doesn’t match a bad hash while still performing the same actions has become a lot easier.

“I’d suspect the amount of attacks will only go up, especially in the short-to-mid-term as companies are not paying enough attention to this attack vector.”

devmio: Given the increasing frequency and sophistication of cyberattacks, developers find themselves at the forefront of defending against digital threats. What practical steps can developers implement in both their professional and personal lives to mitigate risks and protect sensitive information? What specific security measures can be implemented at the application level to protect against supply chain attacks?

Simon Wijckmans: One needs to look at the entire web supply chain, and there are several things to secure outside of third-party scripts.

Firstly, use a tool to monitor your client-side fetched scripts. The fetching itself, the changes of sources, the change in exfiltration paths, the changes in code…that all needs to be accounted for. If you can self-host the scripts, that is one less potential issue. If it uses the same infrastructure as the rest of your site, you have more control.

If you must add a third-party domain, ensure proper DNS security practices are in place. Then make sure they are securing the TCP/IP connection as well as the now standard SSL/TLS handshake. RPKI monitoring and Certificate Issuing monitoring can save the day here. To prevent Cross-Site Scripting (XSS) attacks, properly encode any user-generated content in server responses.

If inline scripts are necessary, use a nonce (a one-time token) in the script tag. CSS expressions can execute code in the browser context, potentially leading to security vulnerabilities, so make sure to cover that. If inline scripts are not necessary, add a CSP header to the site to prevent inline scripts.
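
To illustrate the nonce approach, here is a minimal Node/Express sketch (not from the interview; the route and inline script are placeholders):

import express from 'express';
import { randomBytes } from 'crypto';

const app = express();

app.get('/', (_req, res) => {
  // generate a fresh one-time token for this response
  const nonce = randomBytes(16).toString('base64');

  // only inline scripts carrying this nonce may execute
  res.setHeader('Content-Security-Policy', `script-src 'nonce-${nonce}'`);

  res.send(`<script nonce="${nonce}">console.log('allowed by CSP');</script>`);
});

app.listen(3000);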

Also, set suitable cache directives to avoid storing sensitive information in the cache—especially sensitive pages or one-page applications where scripts are often running site-wide.

If applicable, service workers can intercept network requests, manage caching, and enable offline functionality. Ensure that resources loaded from cross-origin sources are not able to interfere with the rendering of your site and implement anti-CSRF tokens in forms to guard against Cross-Site Request Forgery attacks.
