The Sync Service is good at maintaining connections between objects and synchronising data between them. What it has never been so good at is constructing data from complex rules and lookups, so do as much of the complex processing as possible outside the Sync Service and present the data in a form it can use most efficiently.
Let me illustrate with an example. Say your HR system manages the organisation hierarchy by position rather than by employee, so to get a person’s manager you have to derive it from position numbers. You could import positions as a separate Metaverse object type and then do something clever with FindMVEntries, but should you? In a small environment you’ll get away with it. In a large environment the efficiency hit could be crippling.
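To make the position-based lookup concrete, here is a minimal sketch of that hierarchy. The structure and field names are my own invention, not any particular HR schema: each employee occupies a position, each position knows the position it reports to, and a person’s manager is whoever occupies that parent position.

```python
# Hypothetical position-based hierarchy: employees hold positions,
# and each position reports to a parent position (None at the top).
positions = {
    "P100": {"parent": None},    # top of the tree
    "P200": {"parent": "P100"},  # reports to P100
    "P210": {"parent": "P200"},  # reports to P200
}
employees = {
    "alice": "P100",
    "bob": "P200",
    "carol": "P210",
}

# Invert the employee->position map: who occupies each position?
occupant = {pos: emp for emp, pos in employees.items()}

def manager_of(employee):
    """Derive the manager by walking up one level in the position tree."""
    parent_pos = positions[employees[employee]]["parent"]
    return occupant.get(parent_pos) if parent_pos else None
```

So `manager_of("carol")` walks P210 up to P200 and returns its occupant, "bob". It is exactly this sort of cross-object lookup that is cheap in a purpose-built structure and expensive when emulated with FindMVEntries across Metaverse objects.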
Far better to use SQL, which is great at data crunching. Pass the employee and position data through a SQL table, using SQL scripts to finesse the data into the format best suited to the Sync Service. While you’re at it you can throw in a delta table, and now you have fast, efficient imports and syncs, all while using the OOB SQL MA.
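As a sketch of the staging-table idea, the manager join and the delta detection might look like this. The table and column names are hypothetical, not a prescribed schema, and I am using sqlite3 here purely so the example is self-contained; the same SQL pattern applies on SQL Server.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Staging tables fed by the HR extract, plus a view that presents the
# data the way the Sync Service wants it: manager already resolved.
cur.executescript("""
CREATE TABLE position (pos_id TEXT PRIMARY KEY, parent_pos_id TEXT);
CREATE TABLE employee (emp_id TEXT PRIMARY KEY, name TEXT, pos_id TEXT);

CREATE VIEW ma_import AS
SELECT e.emp_id, e.name, m.emp_id AS manager_id
FROM employee e
JOIN position p      ON p.pos_id = e.pos_id
LEFT JOIN employee m ON m.pos_id = p.parent_pos_id;

-- Previous snapshot and a delta table for fast delta imports.
CREATE TABLE snapshot (emp_id TEXT PRIMARY KEY, name TEXT, manager_id TEXT);
CREATE TABLE delta (emp_id TEXT, change_type TEXT);
""")

cur.executemany("INSERT INTO position VALUES (?, ?)",
                [("P100", None), ("P200", "P100")])
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [("E1", "Alice", "P100"), ("E2", "Bob", "P200")])

# Populate the delta table: anything in the import view that is new or
# differs from the last snapshot (IS NOT is SQLite's null-safe compare).
cur.execute("""
INSERT INTO delta (emp_id, change_type)
SELECT i.emp_id,
       CASE WHEN s.emp_id IS NULL THEN 'Add' ELSE 'Modify' END
FROM ma_import i
LEFT JOIN snapshot s ON s.emp_id = i.emp_id
WHERE s.emp_id IS NULL
   OR s.name IS NOT i.name
   OR s.manager_id IS NOT i.manager_id;
""")
con.commit()
rows = cur.execute(
    "SELECT emp_id, change_type FROM delta ORDER BY emp_id").fetchall()
# The snapshot was empty, so both employees surface as adds.
```

The MA then does a full import from `ma_import` and a delta import from `delta`, with all the expensive join work done in the database rather than in sync rules.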
Another place this SQL-based approach shines is when the data gathering for the MA takes a while. I once wrote an XMA for BPOS (pre-Office 365) to go off and gather information about BPOS users and mailboxes via PowerShell, before realising that it was just too slow. Better to run the PowerShell scripts separately, write the data into a SQL table, and let FIM import it from there.
The Portal and Declarative Sync improve this situation in a number of ways. Declarative rules favour simple flows, and we can use the Portal to prepare data, such as deriving a group membership from people’s attributes. But the need for a presentation layer between FIM Sync and other data sources is not going away, and nor should it. Applications and directories in all their variations will continue to be part of FIM’s ecosystem, and we should use the best tools available to move identity data through it efficiently.