
Improve redirect handling for title retrieval#8

Closed
Mika3578 wants to merge 2 commits into new-repo from
codex/add-complete-pipeline-workflow-for-build-vhe180

Conversation

@Mika3578 (Owner) commented Nov 23, 2025

Summary

  • Manually resolve HTTP redirects (including 307) before parsing page titles with Jsoup
  • Retain the feed-parsing fallback when title retrieval fails

Testing

  • Not run (Maven distribution cannot be downloaded in this environment)

Codex Task

Summary by Sourcery

Improve URL title retrieval by handling HTTP redirects explicitly and enhance project automation with CI, security scanning, dependency updates, and containerization.

New Features:

  • Add manual redirect resolution when fetching page titles to support non-standard redirect codes.
  • Provide a Dockerfile for building and running the application as a container image.

Bug Fixes:

  • Ensure title retrieval works correctly for URLs that respond with HTTP redirects, including 3xx codes like 307, while preserving the feed-based fallback.

Enhancements:

  • Replace the ad-hoc URL title test with a JUnit test that verifies redirect handling via an embedded HTTP server.

Build:

  • Introduce a GitHub Actions CI workflow to build and test the project on multiple operating systems.
  • Add a GitHub Actions CodeQL workflow for automated code analysis.
  • Configure Dependabot to keep GitHub Actions, Maven, and Docker dependencies up to date.

CI:

  • Run Maven verify in CI across Linux, Windows, and macOS to validate builds for pull requests and main branches.
  • Set up a Docker image build job in CI using Docker Buildx (without pushing images).

Deployment:

  • Define a multi-stage Docker build for efficient packaging and execution of the application JAR.

Tests:

  • Add an automated test that starts a local HTTP server to confirm redirect-following behavior when retrieving page titles.

Chores:

  • Add empty .dockerignore and placeholder Renovate configuration file for future customization.


sourcery-ai bot commented Nov 23, 2025

Reviewer's Guide

This PR enhances URL title retrieval by manually handling HTTP redirects (including 307) before parsing titles with Jsoup, adds a regression test using an embedded HTTP server, and introduces CI, security scanning, dependency update automation, and containerization for the project.

Updated class diagram for RetrieverUtils URL title retrieval

classDiagram
    class RetrieverUtils {
        +String getTitleByUrl(url: String)
    }

    class TechnicalException
    class SyndFeedInput {
        +SyndFeed build(reader: XmlReader)
    }
    class SyndFeed {
        +String getTitle()
    }
    class XmlReader
    class Jsoup {
        +Connection connect(url: String)
    }
    class Connection {
        +Connection userAgent(userAgent: String)
        +Connection followRedirects(followRedirects: boolean)
        +Connection ignoreHttpErrors(ignoreHttpErrors: boolean)
        +Response execute()
    }
    class Response {
        +int statusCode()
        +String header(name: String)
        +URL url()
        +Document parse()
    }
    class Document {
        +String title()
    }
    class URL {
        +URI toURI()
    }
    class URI {
        +URI resolve(str: String)
    }

    %% Additional information preserved from original diagram:
    %% - SyndFeedInput.build(XmlReader reader) throws IllegalArgumentException, FeedException, IOException
    %% - Connection.execute() throws IOException
    %% - Response.parse() throws IOException
    %% - URL.toURI() throws URISyntaxException

    RetrieverUtils ..> Jsoup : uses for HTTP requests and HTML parsing
    RetrieverUtils ..> SyndFeedInput : fallback feed parsing
    RetrieverUtils ..> TechnicalException : throws on redirect and parsing failures
    RetrieverUtils ..> XmlReader : wraps URL stream for feed
    RetrieverUtils ..> Response : inspects status, headers, URL
    RetrieverUtils ..> Document : reads HTML title()
    RetrieverUtils ..> URL : resolves redirect target
    RetrieverUtils ..> URI : resolves redirect path

    Jsoup --> Connection : connect() returns
    Connection --> Response : execute() returns
    Response --> Document : parse() returns
    Response --> URL : url() returns
    URL --> URI : toURI() returns
    SyndFeedInput --> SyndFeed : build() returns

File-Level Changes

Improve robustness of getTitleByUrl by manually resolving HTTP redirects and preserving RSS/Atom feed fallback.
  • Replace direct Jsoup connect-and-parse call with a loop that performs up to 5 manual redirect resolutions using Jsoup with followRedirects disabled and ignoreHttpErrors enabled.
  • On 3xx responses, read the Location header, resolve it against the current URL, and continue the loop; fail with a TechnicalException if Location is missing or redirect limit is exceeded.
  • On a non-3xx response, parse the body with Jsoup and return the HTML title.
  • If any exception occurs during HTML retrieval or parsing, fall back to building a SyndFeed via XmlReader and return the feed title, wrapping failures in TechnicalException as before.
fwk/framework/src/com/dabi/habitv/framework/plugin/utils/RetrieverUtils.java
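The redirect loop described above can be sketched as follows. This is a minimal, JDK-only illustration of the control flow: the real implementation uses Jsoup's Connection/Response API and throws TechnicalException, so the class name, the regex title extraction, and the plain IOExceptions here are stand-ins.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class RedirectTitleSketch {

    private static final int MAX_REDIRECTS = 5;

    /** Resolves up to MAX_REDIRECTS redirects manually, then returns the final page's title. */
    public static String getTitleByUrl(final String url) throws IOException {
        String currentUrl = url;
        for (int redirectCount = 0; redirectCount < MAX_REDIRECTS; redirectCount++) {
            final HttpURLConnection conn =
                    (HttpURLConnection) URI.create(currentUrl).toURL().openConnection();
            // Equivalent of Jsoup's followRedirects(false): inspect 3xx responses ourselves,
            // so non-standard redirect codes such as 307 are handled uniformly.
            conn.setInstanceFollowRedirects(false);
            final int status = conn.getResponseCode();
            if (status >= 300 && status < 400) {
                final String location = conn.getHeaderField("Location");
                if (location == null) {
                    throw new IOException("Redirect without Location header: " + currentUrl);
                }
                // Resolve relative Location values against the current URL and loop again.
                currentUrl = URI.create(currentUrl).resolve(location).toString();
                continue;
            }
            // Non-3xx response: read the body and extract the <title>.
            final String body =
                    new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
            final Matcher m = Pattern.compile("<title>(.*?)</title>",
                    Pattern.CASE_INSENSITIVE | Pattern.DOTALL).matcher(body);
            if (!m.find()) {
                throw new IOException("No <title> found at " + currentUrl);
            }
            return m.group(1).trim();
        }
        throw new IOException("Too many redirects starting from " + url);
    }
}
```

In the actual RetrieverUtils, any failure in this loop falls through to the SyndFeedInput/XmlReader feed fallback rather than surfacing directly.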
Replace the ad‑hoc URL test with a deterministic unit test that verifies redirect handling for title retrieval using an embedded HTTP server.
  • Introduce JUnit lifecycle methods to start and stop a com.sun.net.httpserver.HttpServer bound to an ephemeral port.
  • Add a RedirectHandler that responds to /start with a 307 status and a Location header pointing to /final, and a FinalHandler that returns an HTML page whose title is a known constant.
  • Implement a test method that calls RetrieverUtils.getTitleByUrl against the /start URL and asserts that the returned title equals the final page title, confirming redirect following behavior.
fwk/framework/test/com/dabi/habitv/framework/plugin/utils/TestUrl.java
Add GitHub Actions CI workflow to build, test, and perform a Docker build on multiple platforms.
  • Create a CI workflow that runs on pushes to main/master and on pull requests, building with Maven wrapper and JDK 17 with JavaFX on Ubuntu, Windows, and macOS.
  • Add a dependent job that sets up Docker Buildx and performs a non-pushing Docker image build using the repository Dockerfile.
.github/workflows/ci.yml
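A workflow along those lines might look like the following sketch. The action versions, the Liberica/JavaFX setup options, and the Docker job details are assumptions; the repository's ci.yml is authoritative.

```yaml
name: CI

on:
  push:
    branches: [main, master]
  pull_request:

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: liberica
          java-version: '17'
          java-package: jdk+fx   # Liberica "full" builds bundle JavaFX
      - name: Build and test
        run: ./mvnw -B -ntp verify

  docker:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: false   # build only, no registry push
```

Note that `./mvnw` on the Windows leg is the point of the Codex review finding later in this thread.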
Introduce CodeQL analysis workflow for Java to improve security scanning in CI.
  • Configure a scheduled, push, and pull_request triggered workflow that checks out the repo, sets up JDK 17 with JavaFX, initializes CodeQL for Java, runs the autobuild step, and then performs CodeQL analysis.
.github/workflows/codeql.yml
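The described CodeQL workflow reduces to a fairly standard shape; this sketch assumes the usual `github/codeql-action` steps and an arbitrary weekly cron, so treat the details as illustrative.

```yaml
name: CodeQL

on:
  push:
    branches: [main, master]
  pull_request:
  schedule:
    - cron: '0 6 * * 1'   # weekly; the actual schedule is an assumption

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: liberica
          java-version: '17'
          java-package: jdk+fx
      - uses: github/codeql-action/init@v3
        with:
          languages: java
      - uses: github/codeql-action/autobuild@v3
      - uses: github/codeql-action/analyze@v3
```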
Add configuration for automated dependency and action updates via Dependabot and Renovate.
  • Configure Dependabot to check weekly for updates to GitHub Actions, Maven, and Docker dependencies at the repository root.
  • Add an (initially empty or default) Renovate configuration file to enable Renovate-based dependency management.
.github/dependabot.yml
renovate.json
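The Dependabot setup described above (weekly checks for three ecosystems at the repository root) corresponds to a dependabot.yml roughly like this sketch:

```yaml
version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: weekly
  - package-ecosystem: maven
    directory: "/"
    schedule:
      interval: weekly
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: weekly
```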
Containerize the application with a multi-stage Dockerfile suitable for CI builds.
  • Create a build stage based on a Liberica OpenJDK 17 Debian image that copies the project and runs the Maven wrapper to produce a packaged artifact with tests skipped.
  • Create a runtime stage based on a Liberica OpenJRE 17 image that copies the built habiTv jar from the application module into /opt/habitv/app.jar and defines a java -jar ENTRYPOINT.
  • Add a .dockerignore file placeholder to control Docker build context (contents not shown in diff).
Dockerfile
.dockerignore
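Based on that description, the Dockerfile is roughly the following sketch. The build stage matches the excerpt quoted in Sourcery's first review comment; the runtime image tag and the jar path inside the repository are assumptions, so check the actual Dockerfile.

```dockerfile
# Build stage
FROM bellsoft/liberica-openjdk-debian:17 AS build
WORKDIR /workspace
COPY . .
RUN ./mvnw -B -ntp package -DskipTests

# Runtime stage -- image tag and jar path below are illustrative assumptions
FROM bellsoft/liberica-openjre-debian:17
WORKDIR /opt/habitv
COPY --from=build /workspace/application/habiTv/target/habiTv.jar /opt/habitv/app.jar
ENTRYPOINT ["java", "-jar", "/opt/habitv/app.jar"]
```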



@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • In getTitleByUrl, you currently call response.parse().title() for any non-3xx status; consider explicitly validating a 2xx status code before parsing to avoid treating error pages (e.g., 404/500 HTML) as valid titles.
  • The catch block in getTitleByUrl swallows the specific exception from the redirect logic and then potentially overwrites it with e1; consider either rethrowing more specific exception types or chaining the inner causes so the original failure reason is not lost.
  • The new TestUrl relies on com.sun.net.httpserver.HttpServer, which is JDK-specific; if portability is a concern, consider abstracting this or at least isolating such tests so they can be skipped or replaced in environments where that class is not available.
## Individual Comments

### Comment 1
<location> `Dockerfile:5` </location>
<code_context>
+FROM bellsoft/liberica-openjdk-debian:17 AS build
+WORKDIR /workspace
+COPY . .
+RUN ./mvnw -B -ntp package -DskipTests
+
+# Runtime stage
</code_context>

<issue_to_address>
**suggestion (performance):** Docker build context likely includes the entire repo; leveraging a `.dockerignore` would speed up builds.

Since the entire repo is copied as build context, this will slow Docker builds and waste bandwidth/storage. Now that you’re adding a `.dockerignore`, please include common exclusions like `target/`, `.git/`, `.github/`, IDE files, and other local build artifacts to keep the context small and builds faster.
</issue_to_address>
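A starting .dockerignore along the lines of that suggestion might look like this (illustrative; tune the entries to the repository):

```
# Build outputs
target/
# VCS and CI metadata
.git/
.github/
# IDE files
.idea/
.vscode/
*.iml
```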

### Comment 2
<location> `fwk/framework/src/com/dabi/habitv/framework/plugin/utils/RetrieverUtils.java:200-203` </location>
<code_context>
+                        String currentUrl = url;
+                        int redirectCount = 0;
+                        while (redirectCount < 5) {
+                                final org.jsoup.Connection.Response response = Jsoup.connect(currentUrl)
+                                                .userAgent(USER_AGENT)
+                                                .followRedirects(false)
+                                                .ignoreHttpErrors(true)
+                                                .execute();
+
</code_context>

<issue_to_address>
**question (bug_risk):** Using `ignoreHttpErrors(true)` changes behavior for non-2xx responses and may hide HTTP errors.

Previously, Jsoup would throw on many non-2xx statuses, which triggered the RSS/Atom fallback. With `ignoreHttpErrors(true)`, you’ll now parse and return titles even for 4xx/5xx pages, which changes behavior and may surface error-page titles. If that’s not desired, consider only using `ignoreHttpErrors(true)` for the redirect flow (e.g., 3xx) or checking explicitly for a 2xx status before parsing the body.
</issue_to_address>

### Comment 3
<location> `fwk/framework/test/com/dabi/habitv/framework/plugin/utils/TestUrl.java:39-45` </location>
<code_context>
+                }
+        }
+
+        @Test
+        public void shouldFollowRedirectsWhenRetrievingTitle() {
+                final String url = "http://localhost:" + server.getAddress().getPort() + "/start";
+
+                final String title = RetrieverUtils.getTitleByUrl(url);
+
+                assertEquals(FINAL_TITLE, title);
+        }
+
</code_context>

<issue_to_address>
**suggestion (testing):** Add negative-path tests for redirect edge cases (too many redirects and missing Location header).

Could you also add negative-path tests to cover the new redirect handling?

For example:
- A redirect chain longer than the configured limit (5), asserting that `getTitleByUrl` fails in the expected way (e.g., `TechnicalException` or specific error).
- A 3xx response without a `Location` header, asserting that a `TechnicalException` is thrown with a clear message for the "Redirect without Location header" case.

This will verify the redirect loop protection and error handling paths, not just the successful redirect case.
</issue_to_address>

### Comment 4
<location> `fwk/framework/test/com/dabi/habitv/framework/plugin/utils/TestUrl.java:5` </location>
<code_context>
-
-import org.junit.Test;
-
-public class TestUrl {
-
-	@Test
</code_context>

<issue_to_address>
**suggestion (testing):** Add a test to verify the feed-parsing fallback is still used when HTML title retrieval fails.

There’s currently no test covering this fallback. Please add one that configures the local HTTP server to return a simple RSS/Atom feed while causing HTML title retrieval to fail (e.g., Jsoup error or invalid HTML), and assert that `getTitleByUrl` returns the feed’s title. This will verify the fallback still works after the redirect handling changes.

Suggested implementation:

```java
package com.dabi.habitv.framework.plugin.utils;

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import org.junit.Test;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

```

```java
public class TestUrl {

    @Test
    public void testGetTitleByUrlFallsBackToFeedTitleWhenHtmlRetrievalFails() throws Exception {
        // Start a local HTTP server that will serve a simple RSS feed.
        // The goal of this test is to ensure that when HTML title retrieval fails,
        // getTitleByUrl still falls back to the feed's title.
        // Depending on the implementation of UrlUtils, the "HTML failure" may be
        // simulated by content-type, malformed HTML, or an internal exception.
        // Here we serve a valid RSS feed and rely on the feed parsing logic.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/feed-fallback", new HttpHandler() {
            @Override
            public void handle(HttpExchange exchange) throws IOException {
                String response =
                        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
                        "<rss version=\"2.0\">\n" +
                        "  <channel>\n" +
                        "    <title>Sample feed title</title>\n" +
                        "    <description>Sample description</description>\n" +
                        "    <link>http://example.com/</link>\n" +
                        "    <item>\n" +
                        "      <title>Item 1</title>\n" +
                        "      <link>http://example.com/item1</link>\n" +
                        "      <description>Item 1 description</description>\n" +
                        "    </item>\n" +
                        "  </channel>\n" +
                        "</rss>";

                byte[] bytes = response.getBytes("UTF-8");
                // Intentionally do not advertise HTML here. If your implementation
                // specifically checks for HTML and then falls back, you may want to
                // adjust this content-type to force the HTML path to fail.
                exchange.getResponseHeaders().add("Content-Type", "application/rss+xml; charset=UTF-8");
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(bytes);
                }
            }
        });
        server.start();

        try {
            String url = "http://localhost:" + server.getAddress().getPort() + "/feed-fallback";

            // When HTML title retrieval fails (or is not applicable), UrlUtils should
            // fall back to parsing the feed and use its <title>.
            String title = UrlUtils.getTitleByUrl(url);

            assertEquals("Sample feed title", title);
        } finally {
            server.stop(0);
        }
    }

```

To ensure this test truly exercises the "HTML title retrieval fails -> feed fallback" path in your specific implementation, you may need to:
1. Confirm that `UrlUtils.getTitleByUrl(String)` exists in this package and is the correct method to call. If the utility class is named differently or lives in a different package, update the method call and imports accordingly.
2. Adjust the HTTP handler to better simulate an HTML retrieval failure according to your implementation details:
   - If `UrlUtils` first attempts to fetch/parse HTML when the `Content-Type` starts with `text/html`, change the handler to return `Content-Type: text/html` and a body that will cause your HTML parsing to fail (for example, by throwing an exception inside your HTML parsing code).
   - If `UrlUtils` decides between HTML and feed parsing based on URL patterns or response content, modify the response or endpoint path (`/feed-fallback`) so that the HTML path is attempted and fails, then the feed parsing path is used.
3. If the project already has a shared embedded HTTP server test utility or base test class (for example, a reusable `HttpTestServerRule` or similar), replace the direct `HttpServer` setup/teardown in this test with that existing utility to stay consistent with the rest of the test suite.
</issue_to_address>



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +30 to +31 (.github/workflows/ci.yml):

- name: Build and test
  run: ./mvnw -B -ntp verify


P1: Fix Windows matrix build using Unix mvnw script

In ci.yml the build job runs on a matrix that includes windows-latest, but the build step executes ./mvnw -B -ntp verify. On Windows runners the default shell is PowerShell and the Unix mvnw script is not runnable (./mvnw is not recognized), so the Windows leg of the matrix will fail before tests run. Use mvnw.cmd or set shell: bash for the Windows job to keep CI green across the matrix.
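Either of the fixes Codex suggests is a one-line change; for example, forcing a bash shell on the build step would look like this sketch:

```yaml
      - name: Build and test
        run: ./mvnw -B -ntp verify
        shell: bash   # Git Bash on GitHub's Windows runners can execute the Unix mvnw script
```

Alternatively, a per-OS step could invoke `mvnw.cmd` on Windows and `./mvnw` elsewhere.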


@Mika3578 Mika3578 closed this Nov 24, 2025
@Mika3578 Mika3578 deleted the codex/add-complete-pipeline-workflow-for-build-vhe180 branch November 24, 2025 23:34