feat: Add HTTP connection pooling for Lighthouse API calls #75
Patrick-Ehimen merged 2 commits into main
Conversation
Wire the existing ConnectionPool into LighthouseAISDK so that direct HTTP calls (e.g. the uploadViDirectAPI fallback) reuse pooled connections with keep-alive instead of creating new ones per request.

- Add optional `pool` config to LighthouseConfig (default enabled, set false to disable)
- Create ConnectionPool in SDK constructor with configurable max connections, timeouts, and keep-alive
- Add executeHttpRequest helper that routes through the pool or falls back to direct axios
- Expose pool metrics via getConnectionPoolStats()
- Add env var support (LIGHTHOUSE_POOL_MAX_CONNECTIONS, etc.) in MCP server config
- Add 32 new tests covering pool lifecycle, queuing, and SDK integration

Closes #54
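For orientation, here is a minimal usage sketch based on the description above. The import path and the numeric values are assumptions; only the `pool` option shape, `pool: false`, `getConnectionPoolStats()`, and `destroy()` come from this PR.

```typescript
import { LighthouseAISDK } from "@lighthouse-tooling/sdk-wrapper"; // package/export name assumed

// Pool enabled (the default), with explicit settings; values are illustrative.
const sdk = new LighthouseAISDK({
  apiKey: "your-api-key",
  timeout: 30000,
  pool: {
    maxConnections: 10,
    acquireTimeout: 5000,
    idleTimeout: 30000,
    requestTimeout: 30000,
    keepAlive: true,
    maxSockets: 50,
  },
});

// Pool metrics (returns null when pooling is disabled).
console.log(sdk.getConnectionPoolStats());

// Opt out of pooling entirely; requests fall back to direct axios calls.
const noPoolSdk = new LighthouseAISDK({ apiKey: "your-api-key", pool: false });

// Release pooled sockets and timers when finished.
sdk.destroy();
noPoolSdk.destroy();
```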
Reviewer's Guide

Integrates a configurable HTTP ConnectionPool into LighthouseAISDK and the MCP server to reuse keep-alive connections for direct Lighthouse API calls, exposes pool metrics, and wires configuration through SDK and server config, with tests covering lifecycle, configuration, and metrics.

Sequence diagram for HTTP request execution with ConnectionPool in LighthouseAISDK

sequenceDiagram
actor Client
participant LighthouseService
participant LighthouseAISDK
participant ConnectionPool
participant AxiosInstance
participant AxiosStatic
participant LighthouseAPI
Client->>LighthouseService: call uploadViDirectAPI
LighthouseService->>LighthouseAISDK: uploadViDirectAPI(fileBuffer, fileName, apiKey)
activate LighthouseAISDK
LighthouseAISDK->>LighthouseAISDK: executeHttpRequest(config)
alt connectionPool enabled
LighthouseAISDK->>ConnectionPool: acquire()
ConnectionPool-->>LighthouseAISDK: AxiosInstance
LighthouseAISDK->>AxiosInstance: request(config)
AxiosInstance-->>LighthouseAISDK: AxiosResponse
LighthouseAISDK->>ConnectionPool: release(AxiosInstance)
else connectionPool disabled
LighthouseAISDK->>AxiosStatic: request(config)
AxiosStatic-->>LighthouseAISDK: AxiosResponse
end
LighthouseAISDK-->>LighthouseService: AxiosResponse
deactivate LighthouseAISDK
LighthouseService-->>Client: upload result
Class diagram for LighthouseAISDK, LighthouseService, and ConnectionPool configuration

classDiagram
class LighthouseConfig {
string apiKey
string? baseUrl
number? timeout
number? maxRetries
boolean? debug
ConnectionPoolConfig or false pool
}
class ConnectionPoolConfig {
number maxConnections
number acquireTimeout
number idleTimeout
number requestTimeout
boolean keepAlive
number maxSockets
}
class ConnectionPool {
+ConnectionPool(config ConnectionPoolConfig)
+acquire() Promise~AxiosInstance~
+release(instance AxiosInstance) void
+getStats() ConnectionPoolStats
+destroy() void
+on(event string, handler Function) void
}
class ConnectionPoolStats {
number totalConnections
number activeConnections
number idleConnections
number queuedRequests
number totalRequests
number averageWaitTime
}
class LighthouseAISDK {
-LighthouseAutoAuth auth
-EncryptionManager encryption
-RateLimiter rateLimiter
-CircuitBreaker circuitBreaker
-ConnectionPool or null connectionPool
-LighthouseConfig config
+LighthouseAISDK(config LighthouseConfig)
-executeHttpRequest(config AxiosRequestConfig) Promise~AxiosResponse~
+uploadViDirectAPI(fileBuffer Buffer, fileName string, apiKey string) Promise~AxiosResponse~
+getConnectionPoolStats() ConnectionPoolStats or null
+getCircuitBreakerStatus() any
+getErrorMetrics() any
+getActiveOperations() any
+destroy() void
}
class ConnectionPoolServerConfig {
number maxConnections
number idleTimeoutMs
number requestTimeoutMs
boolean keepAlive
}
class ServerConfig {
string name
string version
AuthConfig authentication
PerformanceConfig performance
MultiTenancyConfig multiTenancy
ConnectionPoolServerConfig connectionPool
}
class LighthouseService {
-LighthouseAISDK sdk
-Logger logger
-string or undefined dbPath
-Map~string, StoredFile~ fileCache
-Map~string, Dataset~ datasetCache
+LighthouseService(apiKey string, logger Logger, dbPath string, poolConfig ConnectionPoolConfig)
+getSDKMetrics() any
}
class getSDKMetrics
LighthouseConfig --> ConnectionPoolConfig : uses
LighthouseAISDK --> LighthouseConfig : has
LighthouseAISDK --> ConnectionPool : optional_has
ConnectionPool --> ConnectionPoolConfig : configured_by
ConnectionPool --> ConnectionPoolStats : returns
ServerConfig --> ConnectionPoolServerConfig : has
LighthouseService --> LighthouseAISDK : composes
LighthouseService --> ConnectionPoolConfig : passes_to_pool
LighthouseService --> ServerConfig : configured_by
LighthouseService --> LighthouseConfig : builds
LighthouseService --> getSDKMetrics : returns_connectionPoolStats_via_SDK
LighthouseAISDK --> ConnectionPoolStats : exposes_via_getConnectionPoolStats
ConnectionPoolServerConfig --> ConnectionPoolConfig : maps_to_at_SDK_creation
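Putting the diagrammed pieces together, a small consumer-side sketch follows. The `pool:`-prefixed event names are taken from the review comments below; the event payload shape and the import path are assumptions.

```typescript
import { LighthouseAISDK } from "@lighthouse-tooling/sdk-wrapper"; // package/export name assumed

const sdk = new LighthouseAISDK({ apiKey: "your-api-key" });

// The SDK forwards pool events under a "pool:" prefix (see the review comments
// below); the payload shape here is an assumption.
sdk.on("pool:acquire", (info: unknown) => console.log("connection acquired", info));
sdk.on("pool:queue", (info: unknown) => console.log("request queued", info));

// ConnectionPoolStats fields as listed in the class diagram above.
const stats = sdk.getConnectionPoolStats();
if (stats) {
  console.log(
    `total=${stats.totalConnections} active=${stats.activeConnections} ` +
      `idle=${stats.idleConnections} queued=${stats.queuedRequests} ` +
      `avgWaitMs=${stats.averageWaitTime}`
  );
}

sdk.destroy();
```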
File-Level Changes
Assessment against linked issues
Hey - I've found 5 issues
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `apps/mcp-server/src/config/server-config.ts:10-14` </location>
<code_context>
import * as path from "path";
import * as os from "os";
+export interface ConnectionPoolServerConfig {
+ maxConnections: number;
+ idleTimeoutMs: number;
+ requestTimeoutMs: number;
+ keepAlive: boolean;
+}
+
</code_context>
<issue_to_address>
**issue (bug_risk):** ConnectionPoolServerConfig shape doesn’t align with ConnectionPoolConfig used by the SDK
This type uses `idleTimeoutMs` / `requestTimeoutMs`, while `LighthouseConfig.pool` is `ConnectionPoolConfig`, which (per `LighthouseAISDK` usage) expects `idleTimeout`, `requestTimeout`, `acquireTimeout`, and `maxSockets`. If `ServerConfig.connectionPool` is passed through to `LighthouseService`/SDK, your timeout values will be ignored and required fields may be missing. Please either align this interface with `ConnectionPoolConfig` or explicitly map it to a `ConnectionPoolConfig` before calling `LighthouseAISDK`.
</issue_to_address>
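One way to resolve this, sketched under the assumption that `ConnectionPoolConfig` is exported from the SDK package; the `toSdkPoolConfig` helper and the chosen defaults are illustrative, not part of the PR:

```typescript
import type { ConnectionPoolConfig } from "@lighthouse-tooling/sdk-wrapper"; // export assumed

interface ConnectionPoolServerConfig {
  maxConnections: number;
  idleTimeoutMs: number;
  requestTimeoutMs: number;
  keepAlive: boolean;
}

// Map the server-side shape onto the SDK's ConnectionPoolConfig explicitly so
// the *Ms timeout values are not silently ignored and required fields exist.
function toSdkPoolConfig(server: ConnectionPoolServerConfig): ConnectionPoolConfig {
  return {
    maxConnections: server.maxConnections,
    idleTimeout: server.idleTimeoutMs,
    requestTimeout: server.requestTimeoutMs,
    keepAlive: server.keepAlive,
    acquireTimeout: 5000,              // not present in the server config; illustrative default
    maxSockets: server.maxConnections, // illustrative default
  };
}
```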
### Comment 2
<location> `packages/sdk-wrapper/src/LighthouseAISDK.ts:370-379` </location>
<code_context>
+ * Execute an HTTP request using the connection pool if available,
+ * otherwise fall back to a direct axios call.
+ */
+ private async executeHttpRequest<T = any>(config: AxiosRequestConfig): Promise<AxiosResponse<T>> {
+ if (this.connectionPool) {
+ const instance = await this.connectionPool.acquire();
+ try {
+ return await instance.request<T>(config);
+ } finally {
+ this.connectionPool.release(instance);
+ }
+ } else {
+ const axiosLib: { request: (config: AxiosRequestConfig) => Promise<AxiosResponse<T>> } =
+ eval("require")("axios");
+ return axiosLib.request(config);
</code_context>
<issue_to_address>
**suggestion (performance):** Dynamic axios require in executeHttpRequest is repeated on every non-pooled call
In the non-pooled branch, `axios` is dynamically loaded via `eval("require")` on every call. Since `executeHttpRequest` is now on a hot path, this adds unnecessary overhead and can confuse bundlers. Consider caching the loaded `axios` instance (e.g., in a private field or module-level variable) so the dynamic require runs only once while preserving the indirection.
Suggested implementation:
```typescript
let cachedAxiosLib:
| { request: (config: AxiosRequestConfig) => Promise<AxiosResponse<any>> }
| null = null;
function getAxiosLib<T = any>(): {
request: (config: AxiosRequestConfig) => Promise<AxiosResponse<T>>;
} {
if (!cachedAxiosLib) {
// Use dynamic require to avoid bundler issues while caching the result
cachedAxiosLib = eval("require")("axios");
}
return cachedAxiosLib as {
request: (config: AxiosRequestConfig) => Promise<AxiosResponse<T>>;
};
}
/**
* Execute an HTTP request using the connection pool if available,
* otherwise fall back to a direct axios call.
*/
private async executeHttpRequest<T = any>(config: AxiosRequestConfig): Promise<AxiosResponse<T>> {
```
```typescript
} else {
const axiosLib = getAxiosLib<T>();
return axiosLib.request(config);
}
```
No additional imports are required, assuming `AxiosRequestConfig` and `AxiosResponse` are already imported in this file (they are referenced in the existing code). The helper and cache are module-level, so they will be shared across all uses in this module without further changes.
</issue_to_address>
### Comment 3
<location> `packages/sdk-wrapper/src/__tests__/LighthouseAISDK.test.ts:236-245` </location>
<code_context>
+ customSdk.destroy();
+ });
+
+ it("should use SDK timeout as pool requestTimeout default", () => {
+ const customTimeoutSdk = new LighthouseAISDK({
+ apiKey: "test-key",
+ timeout: 60000,
+ });
+
+ // Pool should be created with requestTimeout matching SDK timeout
+ const stats = customTimeoutSdk.getConnectionPoolStats();
+ expect(stats).not.toBeNull();
+
+ customTimeoutSdk.destroy();
+ });
+ });
</code_context>
<issue_to_address>
**issue (testing):** This test does not actually verify that the pool requestTimeout matches the SDK timeout and may be misleading.
The assertions only check that `getConnectionPoolStats()` is non-null, which doesn’t verify that `requestTimeout` actually matches the SDK `timeout` as the test name implies. Either make the timeout observable (e.g., expose pool config for tests and assert the values match) or add a behavioral check that would fail if `requestTimeout` differed (e.g., fake timers and asserting when a request times out). If neither is feasible, consider renaming the test to reflect what it truly validates to avoid a false sense of coverage.
</issue_to_address>
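As a concrete illustration of the first option, a sketch of the test using a hypothetical `getConnectionPoolConfig()` accessor (not part of the current SDK) that would make the resolved pool configuration observable; it assumes the same test-file context as the snippets above:

```typescript
it("should use SDK timeout as pool requestTimeout default", () => {
  const customTimeoutSdk = new LighthouseAISDK({
    apiKey: "test-key",
    timeout: 60000,
  });

  // getConnectionPoolConfig() is hypothetical; it would expose the resolved
  // pool configuration so the test can assert the actual value.
  const poolConfig = (customTimeoutSdk as any).getConnectionPoolConfig();
  expect(poolConfig.requestTimeout).toBe(60000);

  customTimeoutSdk.destroy();
});
```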
### Comment 4
<location> `packages/sdk-wrapper/src/__tests__/LighthouseAISDK.test.ts:191` </location>
<code_context>
});
});
+ describe("connection pool", () => {
+ it("should create a connection pool by default", () => {
+ const stats = sdk.getConnectionPoolStats();
</code_context>
<issue_to_address>
**suggestion (testing):** Missing integration tests for event forwarding and pooled vs non-pooled HTTP paths.
The new `LighthouseAISDK` behavior forwards pool events (`pool:acquire`, `pool:create`, `pool:queue`, `pool:release`, `pool:cleanup`) and conditionally routes HTTP calls through `ConnectionPool` vs direct axios. The current tests only cover stats and config wiring, not these behaviors.
To strengthen coverage, add tests that:
1. Event forwarding: stub `ConnectionPool` to emit an `"acquire"` (and/or other) event and assert the SDK emits the corresponding `"pool:acquire"` event with the same payload.
2. Request routing: with `pool` enabled, spy on `ConnectionPool.acquire`/`release` and assert they are called from a method that uses `executeHttpRequest` (e.g., `uploadViDirectAPI`); with `pool: false`, mock `axios.request` and assert it is called while `ConnectionPool.acquire` is not.
This will validate the wiring of the SDK layer to the pool and the correct bypass behavior when pooling is disabled.
Suggested implementation:
```typescript
describe("connection pool", () => {
it("should create a connection pool by default", () => {
const stats = sdk.getConnectionPoolStats();
expect(stats).not.toBeNull();
expect(stats).toEqual({
totalConnections: 0,
activeConnections: 0,
idleConnections: 0,
queuedRequests: 0,
totalRequests: 0,
averageWaitTime: 0,
});
});
it("should forward pool events with the correct payload", async () => {
// Arrange: listen for forwarded pool events on the SDK
const poolAcquireListener = vi.fn();
sdk.on("pool:acquire", poolAcquireListener);
// Act: simulate the underlying pool emitting an event
// NOTE: this assumes the SDK exposes the underlying pool instance via `getConnectionPool()`
// or a similar API. Adjust the accessor as needed.
const pool: any = (sdk as any).getConnectionPool
? (sdk as any).getConnectionPool()
: (sdk as any).connectionPool;
const payload = { requestId: "req-123", resource: "http" };
pool.emit("acquire", payload);
// Assert: SDK should forward the event with the same payload
expect(poolAcquireListener).toHaveBeenCalledTimes(1);
expect(poolAcquireListener).toHaveBeenCalledWith(payload);
});
it("should route HTTP calls through the connection pool when pooling is enabled", async () => {
// Arrange: spy on pool acquire/release and a method that uses executeHttpRequest
const pool: any = (sdk as any).getConnectionPool
? (sdk as any).getConnectionPool()
: (sdk as any).connectionPool;
const acquireSpy = vi.spyOn(pool, "acquire");
const releaseSpy = vi.spyOn(pool, "release");
// Act: call a method that uses executeHttpRequest internally
// NOTE: adjust the method name/arguments to match the real SDK API.
await sdk.uploadViDirectAPI({
file: Buffer.from("test"),
mimeType: "text/plain",
});
// Assert: request should have gone through the pool
expect(acquireSpy).toHaveBeenCalled();
expect(releaseSpy).toHaveBeenCalled();
});
it("should bypass the connection pool and call axios directly when pooling is disabled", async () => {
// Arrange: create a non-pooled SDK instance and spy on axios
// NOTE: adjust constructor/options shape to match the real SDK.
const nonPooledSdk = new LighthouseAISDK({
apiKey: "test-api-key",
pool: false,
});
const axiosRequestSpy = vi.spyOn(axios, "request");
const poolAcquireSpy = vi.spyOn(
(nonPooledSdk as any).connectionPool || {},
"acquire",
).mockImplementation?.(() => undefined as any);
// Act: call a method that uses executeHttpRequest internally
await nonPooledSdk.uploadViDirectAPI({
file: Buffer.from("test"),
mimeType: "text/plain",
});
// Assert: axios should be called directly and pool should not be used
expect(axiosRequestSpy).toHaveBeenCalled();
if (poolAcquireSpy) {
expect(poolAcquireSpy).not.toHaveBeenCalled();
}
axiosRequestSpy.mockRestore();
if (poolAcquireSpy) {
poolAcquireSpy.mockRestore();
}
});
```
1. Ensure the test file imports `axios` and `LighthouseAISDK` (and `vi`/`describe`/`it` from your test runner) at the top, if not already present:
- `import axios from "axios";`
- `import { LighthouseAISDK } from "../path/to/LighthouseAISDK";` (adjust path/name as needed)
2. The tests assume:
- The SDK exposes the underlying pool via `sdk.getConnectionPool()` or a `connectionPool` property; if the actual accessor is different, update the references accordingly.
- `uploadViDirectAPI` is a method that internally uses `executeHttpRequest`; if another method is more appropriate, replace it in all three new tests.
- The SDK constructor accepts a `pool: false` option to disable pooling; adjust the configuration object to match the real API.
3. If `connectionPool` is not available on the non-pooled SDK instance, you may want to instead spy on the `ConnectionPool` class (e.g., `vi.spyOn(ConnectionPool.prototype, "acquire")`) to assert it is not called; update the test accordingly.
4. If the SDK uses a different event naming convention (e.g., `'pool:acquire'` vs `'pool.acquire'`) or forwards additional metadata, update the event names and expectations to match the implementation.
</issue_to_address>
### Comment 5
<location> `packages/sdk-wrapper/src/__tests__/LighthouseAISDK.test.ts:259-266` </location>
<code_context>
expect(removeAllListenersSpy).toHaveBeenCalled();
});
+
+ it("should cleanup connection pool on destroy", () => {
+ // After destroy, pool stats should still work (pool is destroyed but method handles it)
+ sdk.destroy();
+
+ // Creating a new SDK to verify pool was destroyed cleanly (no lingering timers)
+ const newSdk = new LighthouseAISDK(config);
+ expect(newSdk.getConnectionPoolStats()).not.toBeNull();
+ newSdk.destroy();
+ });
});
</code_context>
<issue_to_address>
**suggestion (testing):** The destroy test could more directly assert that the connection pool is cleaned up (e.g. timers or destroy call) rather than just creating a new SDK.
Right now this only verifies that creating a new SDK after `sdk.destroy()` still works, which doesn’t directly show that the original pool’s resources were cleaned up.
To better exercise the new cleanup behavior, you could:
- Use Jest fake timers and assert there are no pending timers after `sdk.destroy()` if the pool uses timeouts/intervals, or
- Spy on `ConnectionPool.prototype.destroy` (or the pool instance) and assert it’s called exactly once when `sdk.destroy()` runs.
That would more directly validate the `connectionPool.destroy()` path and resource cleanup semantics.
Suggested implementation:
```typescript
describe("destroy", () => {
it("should cleanup resources", () => {
const removeAllListenersSpy = jest.spyOn(sdk, "removeAllListeners");
const connectionPoolDestroySpy = jest.spyOn(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(sdk as any).connectionPool,
"destroy",
);
sdk.destroy();
expect(removeAllListenersSpy).toHaveBeenCalled();
expect(connectionPoolDestroySpy).toHaveBeenCalledTimes(1);
});
});
```
If `connectionPool` is not directly accessible on `sdk` (e.g. it’s truly private or named differently), you will need to:
1. Adjust `(sdk as any).connectionPool` to the actual property path used in `LighthouseAISDK` for the pool instance, or
2. Import and spy on the pool implementation’s prototype (e.g. `jest.spyOn(ConnectionPool.prototype, "destroy")`) if that’s the established pattern elsewhere in the tests.
</issue_to_address>
Resolve conflicts to keep both connection pooling (PR #75) and batch operations + memory management from main.
Summary
- Wire the existing `ConnectionPool` into `LighthouseAISDK` so direct HTTP calls reuse pooled keep-alive connections instead of creating new ones per request
- Make pooling configurable via `LighthouseConfig.pool` and environment variables (`LIGHTHOUSE_POOL_MAX_CONNECTIONS`, `LIGHTHOUSE_POOL_IDLE_TIMEOUT`, `LIGHTHOUSE_POOL_REQUEST_TIMEOUT`, `LIGHTHOUSE_POOL_KEEP_ALIVE`)
- Expose pool metrics via `getConnectionPoolStats()` and the MCP server's `getSDKMetrics()`

Changes
| File | Change |
| --- | --- |
| `packages/sdk-wrapper/src/types.ts` | Added optional `pool` config to `LighthouseConfig` |
| `packages/sdk-wrapper/src/LighthouseAISDK.ts` | Added `executeHttpRequest` helper, refactored `uploadViDirectAPI` to use the pool, added `getConnectionPoolStats()`, pool cleanup in `destroy()` |
| `apps/mcp-server/src/services/LighthouseService.ts` | Exposes connection pool stats via `getSDKMetrics()` |
| `apps/mcp-server/src/config/server-config.ts` | Added `ConnectionPoolServerConfig` interface and `DEFAULT_CONNECTION_POOL_CONFIG` with env var support (see the sketch after the test plan) |
| `packages/sdk-wrapper/src/__tests__/ConnectionPool.test.ts` | New tests covering pool lifecycle and queuing |
| `packages/sdk-wrapper/src/__tests__/LighthouseAISDK.test.ts` | New tests covering SDK connection pool integration |

Test plan
- `pnpm run build`: all 7 packages compile successfully
- `pnpm --filter @lighthouse-tooling/sdk-wrapper test`: 32 new/updated tests pass
- `pool: false` disables pooling without breaking functionality

Closes #54
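For reference, a rough sketch of how the documented env vars could populate `DEFAULT_CONNECTION_POOL_CONFIG`. The parsing helper and default values are assumptions; only the variable names and the `ConnectionPoolServerConfig` fields come from this PR.

```typescript
interface ConnectionPoolServerConfig {
  maxConnections: number;
  idleTimeoutMs: number;
  requestTimeoutMs: number;
  keepAlive: boolean;
}

// Parse an integer env var, falling back when it is missing or invalid.
function envInt(name: string, fallback: number): number {
  const parsed = Number.parseInt(process.env[name] ?? "", 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

export const DEFAULT_CONNECTION_POOL_CONFIG: ConnectionPoolServerConfig = {
  maxConnections: envInt("LIGHTHOUSE_POOL_MAX_CONNECTIONS", 10),    // defaults are illustrative
  idleTimeoutMs: envInt("LIGHTHOUSE_POOL_IDLE_TIMEOUT", 30_000),
  requestTimeoutMs: envInt("LIGHTHOUSE_POOL_REQUEST_TIMEOUT", 30_000),
  keepAlive: process.env.LIGHTHOUSE_POOL_KEEP_ALIVE !== "false",
};
```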
Summary by Sourcery
Integrate HTTP connection pooling into the Lighthouse SDK and MCP server to reuse connections, expose pool metrics, and make pooling configurable.
New Features:
Enhancements:
Tests: