Bart Read edited this page Oct 30, 2015 · 6 revisions

## Do my objects need to implement a property for every column in the corresponding database table?

No, not at all. Dapper.SimpleSave doesn't pay any attention to the actual database schema, deriving all the information it needs from your .NET types and the attribute decorations you've added to them.
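For illustration, a minimally decorated type might look something like the sketch below. The attribute names (`[Table]`, `[PrimaryKey]`, `[OneToMany]`) follow Dapper.SimpleSave's conventions, but the class, table, and property names are hypothetical:

```csharp
using System.Collections.Generic;
using Dapper.SimpleSave;

[Table("[User]")]
public class User
{
    [PrimaryKey]
    public int? UserId { get; set; }

    public string Name { get; set; }

    // A child collection saved along with its parent. The underlying
    // tables may contain other columns: SimpleSave only reads and
    // writes the columns these properties map to.
    [OneToMany]
    public IList<Address> Addresses { get; set; }
}
```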

## Can I customise the SQL that Dapper.SimpleSave generates?

No, this is not supported. Hopefully it'll never be supported. If you find yourself in a situation where this is absolutely necessary you may wish to consider alternative strategies (e.g., write raw Dapper or ADO.NET code, or switch to EF or NHibernate for an isolated sub-domain within your app).

## Why do I need to provide the old object when I want to save changes to an existing object (i.e., do an UPDATE)? Isn't this inefficient?

One of the assumptions that we built Dapper.SimpleSave on is that roundtrips to the database are relatively expensive compared with computation time within a process boundary. This takes into account fallacies 2 (latency is zero) and 7 (transport cost is zero) of the Fallacies of distributed computing.

We therefore want to minimise the number of such roundtrips. Or, in other words, if we view our service layer and the database as separate nodes in a distributed system, we want to minimise the chattiness of the protocol they use to communicate by keeping things as coarse-grained as possible. Why UPDATE or DELETE rows in different tables one at a time when you could do ten at once?

Unfortunately this logic doesn't hold for INSERTs. We need to retrieve primary key values for newly inserted rows and populate the objects to which they map, both so that these values can be reused in subsequent INSERTs or UPDATEs within the same transaction, and so that our objects are in a correct state should we wish to continue using them after our database transaction has completed.

Sometimes we can still bundle INSERTs in with other DML statements, but they will always be the last or last-but-one statement in any batch of SQL we send to the server. For numeric PKs the very last statement in a batch is a command that retrieves the PK value of the inserted row.

Getting back to the original question therefore, we need the old version of the object so that we can compare it with the new version to find out what has changed. As already described, this then allows us to build the SQL batches we need to save those changes to the database.
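In code, the typical diff-based update therefore takes both versions. A sketch, assuming a `User` type decorated with SimpleSave attributes; `LoadUser` and `CloneUser` are hypothetical helpers standing in for however you load and copy your objects:

```csharp
// Load (or fetch from cache) the current state of the object...
User oldUser = LoadUser(userId);   // hypothetical loader, e.g. via Dapper.SimpleLoad

// ...apply your changes to a copy, leaving the old version untouched...
User newUser = CloneUser(oldUser); // hypothetical deep copy
newUser.Name = "New name";

// ...then let SimpleSave diff the two versions and generate
// only the UPDATEs (and any INSERTs/DELETEs) actually needed.
connection.Update(oldUser, newUser);
```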

All of that being said, from Dapper.SimpleSave 1.0.138 onwards you can avoid specifying the old object as long as:

  1. The object already has a corresponding row in the database

  2. The object has a primary key value set that matches the value of the database row described in (1).

  3. You don't mind that the UPDATE will SET the values of all columns for which properties are defined.

If (1) isn't true then a new row will be inserted and, if the primary key is generated by the database, a new primary key will be generated for the object.

In theory then, you can do either of the following to UPDATE an object:

```csharp
// Method 1: treat your UPDATE as though it's an INSERT.
connection.Create(existingObject);

// Method 2: make it explicit that you're most likely doing an UPDATE.
connection.Update(null, existingObject);
```

Both of these will appear to work, and for some objects may perform better than the scenario where you need to supply the old object. Neither is recommended though, at least not for anything but the simplest of objects.

Why?

Because if only a new object has been supplied, Dapper.SimpleSave has no way to know whether objects have been added to or removed from child collections, which may lead to incorrect results further down the line if an attempt is made to update them in this way.

This mechanism is intended to deal only with a scenario where a row may contain updates but related rows are to be ignored. E.g., inserting a new parent object where one of its child objects already exists in the database but may have been modified along with the parent INSERT.

## Why do I need to provide an object, rather than just a primary key value, to the Delete method?

Remember that Dapper.SimpleSave is concerned with saving hierarchies of objects to a database. Therefore, to correctly execute a DELETE it needs the complete object hierarchy, with you passing in the reference to the top level object. For example, a User.

In this case a single DELETE may well involve multiple DELETEs and UPDATEs across multiple tables. Dapper.SimpleSave can't work out what these should be from the primary key alone - it needs the whole object.

Moreover, Dapper.SimpleSave only deals with writing objects to the database. It has no idea how to read them - that's what Dapper and, in particular, its multi-mapping functionality are for (or, better, Dapper.SimpleLoad if you need to make use of multi-mapping). Because of this, if you give it only a primary key value, it has no idea how to rebuild the complete object hierarchy, so can't execute a correct transaction to delete that object and possibly its children.

(Aside: if you've implemented a repository pattern you're obviously perfectly free to implement Delete() methods on them that take only a primary key value, as long as you pass the object itself - either from a cache, or freshly loaded from the database - into Dapper.SimpleSave.)
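Such a repository might look something like the sketch below, where `LoadUser` stands in for whatever Dapper multi-mapping or Dapper.SimpleLoad query you use to rehydrate the object hierarchy:

```csharp
using System;
using System.Data;

public class UserRepository
{
    private readonly IDbConnection _connection;

    public UserRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    // Convenience method taking only a primary key...
    public void Delete(int userId)
    {
        // ...but SimpleSave itself still needs the whole hierarchy,
        // so rehydrate it first (from a cache, or the database).
        User user = LoadUser(userId);
        _connection.Delete(user);
    }

    private User LoadUser(int userId)
    {
        // Hypothetical: load with Dapper multi-mapping or Dapper.SimpleLoad.
        throw new NotImplementedException();
    }
}
```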

## Where do I get the old object from?

It depends:

  * It might be acceptable to retrieve it directly from the database, assuming this is inexpensive (this should be true if your database is well optimised and you can pull it back with a single query).

  * Alternatively, if roundtripping is an issue you might have implemented some form of middleware caching, in which case you should pull the old object from there, and replace it with the new one after a successful save.

  * If you know you can only have changed a small subset of properties - from, for example, a simple view model - you can create the new object by copying the old one using a copy constructor, or a library such as AutoMapper, and then apply your potentially modified properties to the new object. Again, this works particularly well in a scenario where caching is used.
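The last option above might look like the following sketch, using a hypothetical cache, view model, and copy constructor (AutoMapper would work equally well for the copy):

```csharp
// Old object from the cache (or freshly loaded from the database).
User oldUser = cache.Get<User>(viewModel.UserId);

// Copy it, so the old version is preserved for the diff...
var newUser = new User(oldUser); // hypothetical copy constructor

// ...apply only the properties the view model can have changed...
newUser.Name = viewModel.Name;
newUser.Email = viewModel.Email;

// ...save the diff, and refresh the cache on success.
connection.Update(oldUser, newUser);
cache.Put(newUser.UserId, newUser);
```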
