---
title: MongoDB Baseline Testing
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

### Baseline testing of MongoDB
Perform baseline testing by verifying MongoDB is running, logging into the shell, executing a few test queries, and monitoring live performance. This ensures the database is functioning correctly before starting any benchmarks.

1. Verify Installation & Service Health

```console
ps -ef | grep mongod
mongod --version
netstat -tulnp | grep 27017
```
- **ps -ef | grep mongod** – Checks if the MongoDB server process is running.
- **mongod --version** – Shows the version of MongoDB installed.
- **netstat -tulnp | grep 27017** – Checks if MongoDB is listening for connections on its default port, 27017. On newer distributions where `netstat` is unavailable, `ss -tulnp | grep 27017` is the drop-in replacement.

You should see an output similar to:

```output
azureus+ 976 797 0 05:00 pts/0 00:00:00 grep --color=auto mongod
db version v8.0.12
Build Info: {
    "version": "8.0.12",
    "gitVersion": "b60fc6875b5fb4b63cc0dbbd8dda0d6d6277921a",
    "openSSLVersion": "OpenSSL 3.3.3 11 Feb 2025",
    "modules": [],
    "allocator": "tcmalloc-google",
    "environment": {
        "distmod": "rhel93",
        "distarch": "aarch64",
        "target_arch": "aarch64"
    }
}
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:27017    0.0.0.0:*    LISTEN    1113/./mongodb-linu
```

2. Storage and Health Check

```console
ls -lh ~/mongodb-data/db
```
This lists MongoDB’s data files to confirm they exist. Next, test the disk’s read speed; you want steady, consistent performance.

You should see an output similar to:

```output
total 6.5M
-rw-------. 1 azureuser azureuser   50 Aug  8 10:54 WiredTiger
-rw-------. 1 azureuser azureuser   21 Aug  8 10:54 WiredTiger.lock
-rw-------. 1 azureuser azureuser 1.5K Aug 11 13:05 WiredTiger.turtle
-rw-------. 1 azureuser azureuser 100K Aug 11 13:05 WiredTiger.wt
-rw-------. 1 azureuser azureuser 4.0K Aug 11 13:05 WiredTigerHS.wt
-rw-------. 1 azureuser azureuser  36K Aug 11 13:05 _mdb_catalog.wt
-rw-------. 1 azureuser azureuser 3.1M Aug 11 13:05 collection-0-18324942683865842057.wt
-rw-------. 1 azureuser azureuser  20K Aug 11 13:05 collection-0-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  36K Aug 11 13:05 collection-2-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  12K Aug 11 13:05 collection-4-2816474184925722673.wt
-rw-------. 1 azureuser azureuser 1.3M Aug 11 13:05 collection-62-18324942683865842057.wt
-rw-------. 1 azureuser azureuser 4.0K Aug  8 12:41 collection-7-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  12K Aug 11 13:05 collection-82-18324942683865842057.wt
drwx------. 2 azureuser azureuser 4.0K Aug 11 13:05 diagnostic.data
-rw-------. 1 azureuser azureuser 1.4M Aug 11 13:05 index-1-18324942683865842057.wt
-rw-------. 1 azureuser azureuser  20K Aug 11 13:05 index-1-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  36K Aug 11 13:05 index-3-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  12K Aug 11 13:05 index-5-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  36K Aug 11 13:05 index-6-2816474184925722673.wt
-rw-------. 1 azureuser azureuser 504K Aug 11 13:05 index-71-18324942683865842057.wt
-rw-------. 1 azureuser azureuser 4.0K Aug  8 12:41 index-8-2816474184925722673.wt
-rw-------. 1 azureuser azureuser  12K Aug 11 13:05 index-83-18324942683865842057.wt
-rw-------. 1 azureuser azureuser 4.0K Aug  8 12:41 index-9-2816474184925722673.wt
drwx------. 2 azureuser azureuser 4.0K Aug 11 04:10 journal
-rw-------. 1 azureuser azureuser    0 Aug 11 13:05 mongod.lock
-rw-------. 1 azureuser azureuser  36K Aug 11 13:05 sizeStorer.wt
-rw-------. 1 azureuser azureuser  114 Aug  8 10:54 storage.bson
```

Run the command below to check how fast your storage can **randomly read small 4KB chunks** from a 100 MB file for 30 seconds, using one job, and then show a summary report:

```console
fio --name=baseline --rw=randread --bs=4k --size=100M --numjobs=1 --time_based --runtime=30 --group_reporting
```
You should see an output similar to:

```output
baseline: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.37
Starting 1 process
baseline: Laying out IO file (1 file / 100MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=19.8MiB/s][r=5058 IOPS][eta 00m:00s]
baseline: (groupid=0, jobs=1): err= 0: pid=1065: Tue Aug 12 05:04:48 2025
  read: IOPS=5201, BW=20.3MiB/s (21.3MB/s)(610MiB/30001msec)
    clat (usec): min=83, max=21382, avg=191.64, stdev=106.48
    lat (usec): min=83, max=21382, avg=191.68, stdev=106.48
    clat percentiles (usec):
     | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 100], 20.00th=[ 139],
     | 30.00th=[ 155], 40.00th=[ 169], 50.00th=[ 225], 60.00th=[ 229],
     | 70.00th=[ 233], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 269],
     | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 412], 99.95th=[ 465],
     | 99.99th=[ 635]
   bw ( KiB/s): min=17888, max=22896, per=100.00%, avg=20815.73, stdev=1085.63, samples=59
   iops        : min= 4472, max= 5724, avg=5203.93, stdev=271.41, samples=59
  lat (usec)   : 100=10.12%, 250=81.30%, 500=8.55%, 750=0.02%, 1000=0.01%
  lat (msec)   : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=0.27%, sys=3.38%, ctx=156062, majf=0, minf=7
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=156056,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
   READ: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=610MiB (639MB), run=30001-30001msec
Disk stats (read/write):
  sda: ios=155988/70, sectors=1247904/1016, merge=0/31, ticks=29060/28, in_queue=29096, util=95.44%
```
The output shows how fast the storage read data (**20.3 MiB/s**) and how many reads it completed per second (**~5,200 IOPS**), which tells you how responsive your storage is for random reads.

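As a quick sanity check on the fio report, the two headline numbers should agree with each other: bandwidth is roughly IOPS multiplied by the 4 KiB block size. A minimal sketch of that arithmetic (plain JavaScript, runnable in Node.js or mongosh):

```javascript
// Bandwidth (MiB/s) ≈ IOPS × block size / 1 MiB
const iops = 5201;       // avg IOPS from the fio report above
const blockSize = 4096;  // --bs=4k
const mib = (iops * blockSize) / (1024 * 1024);
console.log(mib.toFixed(1) + " MiB/s"); // prints "20.3 MiB/s", matching fio's BW line
```

If the computed figure diverges from fio's reported bandwidth, you are probably misreading the units (MiB/s versus MB/s).
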
3. Connectivity and CRUD Sanity Check

```console
mongosh --host localhost --port 27017
```

Inside the shell, run:

```javascript
use baselineDB
db.testCollection.insertOne({ name: "baseline-check", value: 1 })
db.testCollection.find()
db.testCollection.updateOne({ name: "baseline-check" }, { $set: { value: 2 } })
db.testCollection.deleteOne({ name: "baseline-check" })
exit
```
These commands create a test record, read it, update its value, and then delete it: a simple way to check that MongoDB’s basic **create, read, update, and delete** (CRUD) operations are working.

You should see an output similar to:

```output
test> use baselineDB
switched to db baselineDB
baselineDB> db.testCollection.insertOne({ name: "baseline-check", value: 1 })
{
  acknowledged: true,
  insertedId: ObjectId('689acdae6a86b49bca74e39a')
}
baselineDB> db.testCollection.find()
[
  {
    _id: ObjectId('689acdae6a86b49bca74e39a'),
    name: 'baseline-check',
    value: 1
  }
]
baselineDB> db.testCollection.updateOne({ name: "baseline-check" }, { $set: { value: 2 } })
{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 1,
  upsertedCount: 0
}
baselineDB> db.testCollection.deleteOne({ name: "baseline-check" })
{ acknowledged: true, deletedCount: 1 }
```
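
In the `updateOne` result above, `matchedCount` counts documents that satisfied the filter, while `modifiedCount` counts documents whose contents actually changed. A hypothetical in-memory sketch of that distinction (plain JavaScript, not the MongoDB driver):

```javascript
// Hypothetical in-memory model of updateOne's result document (illustration only)
function updateOne(collection, filter, set) {
  // Find the first document matching every field in the filter
  const doc = collection.find(d =>
    Object.entries(filter).every(([k, v]) => d[k] === v)
  );
  if (!doc) return { matchedCount: 0, modifiedCount: 0 };
  // Only count the document as modified if a field actually changes
  const changed = Object.entries(set).some(([k, v]) => doc[k] !== v);
  Object.assign(doc, set);
  return { matchedCount: 1, modifiedCount: changed ? 1 : 0 };
}

const coll = [{ name: "baseline-check", value: 1 }];
console.log(updateOne(coll, { name: "baseline-check" }, { value: 2 }));
// → { matchedCount: 1, modifiedCount: 1 }
console.log(updateOne(coll, { name: "baseline-check" }, { value: 2 }));
// → { matchedCount: 1, modifiedCount: 0 } (matched again, but nothing changed)
```

Running an identical update twice is a quick way to see the two counters diverge in the real shell as well.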

4. Basic Query Performance Test

```console
mongosh --eval '
db = db.getSiblingDB("baselineDB");
for (let i=0; i<1000; i++) { db.perf.insertOne({index:i, value:Math.random()}) };
var start = new Date();
db.perf.find({ value: { $gt: 0.5 } }).count();
print("Query Time (ms):", new Date() - start);
'
```
The command connects to MongoDB, switches to the **baselineDB** database, inserts **1,000 documents** into the **perf** collection, and then measures how long it takes to count the documents where **value > 0.5**. The final line prints the **query execution time** in milliseconds.

You should see an output similar to:

```output
Query Time (ms): 2
```
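
The timing pattern above, capturing `new Date()` before and after the operation, is plain JavaScript, so the same idea can wrap any operation you want to measure. A minimal sketch with a pure-JavaScript stand-in for the query (runnable in Node.js or mongosh):

```javascript
// Generic wall-clock timing helper, mirroring the pattern used above
function timeMs(fn) {
  const start = new Date();
  const result = fn();
  return { result, ms: new Date() - start };
}

// Stand-in workload: count random values above 0.5, like the query does
const { result, ms } = timeMs(() => {
  let matched = 0;
  for (let i = 0; i < 1000000; i++) if (Math.random() > 0.5) matched++;
  return matched;
});
console.log("Matched:", result, "Elapsed (ms):", ms);
```

In mongosh the wrapped function would be the `db.perf.find({ value: { $gt: 0.5 } }).count()` call from the step above. Note that this measures wall-clock time at the client, including network round trips, not server-side execution time.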

5. Index Creation Speed Test

```console
mongosh --eval '
db = db.getSiblingDB("baselineDB");
var start = new Date();
db.perf.createIndex({ value: 1 });
print("Index Creation Time (ms):", new Date() - start);
'
```
The test connects to MongoDB, switches to the **baselineDB** database, and creates an index on the **value** field in the **perf** collection. In this example the index build completed in **38 milliseconds**, which is fast for a dataset of this size.

You should see an output similar to:

```output
Index Creation Time (ms): 38
```

6. Concurrency Smoke Test

```console
for i in {1..5}; do
  /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' &
done
wait
```
This command runs **five MongoDB insert jobs at the same time**, each adding **1,000 new records** to the **baselineDB.concurrent** collection.
It’s a quick way to test how MongoDB handles **multiple users writing data at once**.

You should see an output similar to:

```output
[1] 1281
[2] 1282
[3] 1283
[4] 1284
[5] 1285
switched to db baselineDB;
switched to db baselineDB;
[1]   Done    /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))'
switched to db baselineDB;
[2]   Done    /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))'
[3]   Done    /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))'
switched to db baselineDB;
switched to db baselineDB;
[4]-  Done    /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))'
[5]+  Done    /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))'
```

**Five parallel MongoDB shell sessions** were executed, each inserting **1,000** test documents into the baselineDB.concurrent collection. All sessions completed successfully, confirming that concurrent data insertion works as expected.
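
The document-generating expression in the loop above is plain JavaScript rather than anything MongoDB-specific: `[...Array(1000).keys()]` produces the integers 0 through 999, and `map` turns each integer into a document. A smaller version makes the shape visible (runnable in Node.js or mongosh):

```javascript
// [...Array(n).keys()] yields [0, 1, ..., n-1]
const docs = [...Array(3).keys()].map(k => ({ test: k, ts: new Date() }));
console.log(docs.map(d => d.test)); // prints [ 0, 1, 2 ]
```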

The operations above confirm that MongoDB is installed successfully and functioning as expected on the Azure Cobalt 100 (Arm64) environment.

Your MongoDB instance is now ready for further benchmarking.