Ethan edited this page Jun 28, 2017 · 11 revisions

Features needed for 3.0

  • Pagination & filtering

    • just pass settings to adapter
    • make sure we're not opening up query injection vulnerabilities when we handle this info in the adapter. Or, ideally, if we define a standard format, we can do some of this prevention before the data reaches the adapter.
    • since I don't have a format for this yet, using Waterline's probably makes the most sense.
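Until a standard format is settled on, the injection-prevention idea above could be sketched roughly like this (all names here are hypothetical, and the equality-only criteria shape is an assumption, not a decided format):

```javascript
// Hypothetical sketch: whitelist filterable fields before the criteria
// ever reach the adapter, so user-controlled keys can't inject arbitrary
// fields or operators into the query.
const ALLOWED_FILTER_FIELDS = new Set(["status", "date", "title"]);

function parseFilters(queryParams) {
  const criteria = {};
  for (const [key, value] of Object.entries(queryParams)) {
    // Only accept keys shaped like filter[fieldName].
    const match = /^filter\[(\w+)\]$/.exec(key);
    if (!match) continue;
    const field = match[1];
    if (!ALLOWED_FILTER_FIELDS.has(field)) {
      throw new Error(`Cannot filter on unknown field: ${field}`);
    }
    // Plain equality only; richer operators would need their own whitelist.
    criteria[field] = String(value);
  }
  return criteria;
}
```

Doing this before the adapter sees anything means every adapter gets the safety check for free, which is the advantage of defining the format outside the adapter.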
  • Adapters

  • Transforms

    • allow user to transform the query that's built from the request, not just the resources sent to/returned from the db. E.g., we can filter on some virtuals (those that are simple aliases) by transforming the query so the query contains a where that uses the field names as they're stored in the db (#103). See also #110.
      • label mappers can be subsumed by this, and we'll get a performance boost: instead of doing a separate query for the matching ids, they just add a where clause to the query. Usually, it'll be something simple (like where status = 'draft' or where time >= Date.now()) but, in the degenerate case, it could just be a list of ids. This is more performant, more consistent (it avoids a race condition between queries), and more elegant to implement.
      • Note: this could be the solution to #104 as well: at this "transform the query" stage, we modify the query to remove requests for any protected fields, and then remove any includes for those removed fields. If the fields to remove (with a predicate fn testing whether the user has access) were defined declaratively, we could do this automatically to minimize the bugs.
    • for transforming response content, linkage obviously needs to be transformable, but...
      • how does the notion of transforming linkage fit with the notion of transforming the request-derived query? I'd imagine that anyone transforming linkage (i.e., types and ids) might want to transform the type/id parsed from the URL in the same way.
    • but what about Carl's use case of using these transforms to fail requests? Is this the right way to fail a delete request? And how should we fail (e.g. with a 403) a request for a single resource, in the case that a request for a collection containing that resource would succeed (just with the resource omitted)?
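The "label mapper as a query transform" idea above might look something like this (a rough sketch: the transform shape and the Waterline-style criteria object are assumptions, not this library's actual API):

```javascript
// Sketch: each label maps to a function that rewrites the request-derived
// query, adding a where clause instead of running a separate query to
// resolve the label to a list of ids.
const labelToQueryTransform = {
  drafts: (query) => ({
    ...query,
    where: { ...query.where, status: "draft" }
  }),
  upcoming: (query) => ({
    ...query,
    where: { ...query.where, time: { ">=": Date.now() } }
  })
};

function applyLabel(label, query) {
  const transform = labelToQueryTransform[label];
  if (!transform) throw new Error(`Unknown label: ${label}`);
  return transform(query);
}
```

Because the label becomes part of the single query, there's no window between "resolve label to ids" and "fetch by ids" for the data to change, which is the race condition mentioned above.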
  • Dbs, transactions, multiple queries

    • Add a Waterline adapter so we can support Postgres? Or maybe just support Postgres directly with Knex? (But populate is nice, and this library kinda assumes it'll have a model to read... The downside is that Waterline doesn't seem to support a lot of Postgres-specific goodness, and its transactions are still undocumented.)
    • If we have every request map to one adapter function call, the adapter function calls can enforce atomicity as well as the backing database can. In Postgres, that can be a pure transaction. In Mongo, we can at least do pre-checks where possible to minimize the chance that one of the writes will fail. (E.g., we could address #121 by calling validate on all docs before saving any of them.)
    • Questions:
      • Is there a more robust way to do transactions in mongo? See Adam's email
        • we can apply the idea of a two-phase commit in mongo, treating each atomic document update like we would a separate database in the normal distributed, two-phase setup. this does leave open the possibility, though, of the app seeing intermediate results.
        • See https://www.npmjs.com/package/fawn
      • What if we want to do multiple adapter methods across a single transaction? Could we make something like
        const t = adapter.transaction();
        t.do(...); t.do(...);
        t.execute().then(result => {}, err => {});
        
        • The idea behind this is that some requests might need multiple queries. See #110.
        • This is not that hard, and it's supported in Waterline: https://github.com/balderdashy/waterline/issues/62
          • Note that waterline doesn't support transactions across different adapters yet, I don't think, and while that's not hard conceptually, there are lots of caveats in practice: all the adapters would need to support transactions individually for their portion of the larger transaction, and then you could use something like the two-phase commit protocol (basically, asking all adapters whether their individual transaction worked, applying all the transactions if so, and rolling them all back otherwise).
        • Fortune does something similar: http://fortune.js.org/api/#adapter-begintransaction
        • Maybe think of it as: Request -> Query[] (with transformed payload and query) -> Results -> Transformed Results -> Response
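The Mongo "pre-check" idea above (e.g. validating all docs before saving any of them, as suggested for #121) might look like this; validate and save here are assumed hooks, loosely modeled on Mongoose's document API:

```javascript
// Sketch of the pre-check idea for stores without multi-document
// transactions: validate every document first, and only save if all pass.
async function saveAllOrNothing(docs) {
  // Phase 1: validate everything up front, so a late validation failure
  // can't leave earlier documents already persisted.
  await Promise.all(docs.map((doc) => doc.validate()));
  // Phase 2: persist. Failures here are still possible (this isn't a real
  // transaction), but the most common failure mode is ruled out.
  return Promise.all(docs.map((doc) => doc.save()));
}
```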
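The cross-adapter two-phase commit described above could be coordinated roughly like this (a sketch; the prepare/commit/rollback methods are assumed, not an existing adapter interface):

```javascript
// Sketch of a two-phase commit coordinator across adapter-local
// transactions: commit all of them only if every participant reports
// that its local transaction can commit.
async function twoPhaseCommit(transactions) {
  // Phase 1: ask every participant whether its local work can commit.
  const votes = await Promise.all(
    transactions.map((t) => t.prepare().then(() => true, () => false))
  );
  if (votes.every(Boolean)) {
    // Phase 2a: everyone voted yes; apply all local transactions.
    await Promise.all(transactions.map((t) => t.commit()));
    return true;
  }
  // Phase 2b: at least one participant failed; roll everything back.
  await Promise.all(transactions.map((t) => t.rollback()));
  return false;
}
```

As with the Mongo case, this still isn't fully atomic (a crash between the commit calls can leave partial results), which is one of the practical caveats mentioned above.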
  • Nested includes

  • Add features that JSON:API has been slow to add, but that have just proven essential...
