A short and accurate answer to this question, unfortunately, is impossible. To understand the pros and cons of different transaction sizes, you need to dive deep into the database engine and consider several factors: the type of table (with or without a clustered index), the type of insert (appending at the end of the table or writing into random pages), and the transaction isolation level in effect at the moment.
I will try to explain briefly, to the best of my understanding. Please note that my answer may be inaccurate in places and does not take certain features into account.
So, what happens in a transaction that inserts data:
- A schema stability lock is taken on the table into which the insert occurs (so that no one can change the table definition while the insert is in progress).
- The rows being written receive exclusive locks. Such a lock prevents other transactions from reading those rows under certain isolation levels, for example READ COMMITTED, until your transaction completes. With a massive insert, lock escalation may occur, and instead of individual rows, whole pages or (in some cases) the entire table will be locked.
- The rows added to the table are also written to the transaction log, and until the transaction completes, those log records cannot be removed from the log.
- When the transaction completes, its log records are marked as committed and become eligible for truncation: immediately under the SIMPLE recovery model, or only after a log backup under the FULL or BULK_LOGGED model.
- The locks imposed by the transaction are removed.
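The sequence above can be sketched with an explicit transaction. This is a minimal illustration, assuming SQL Server; the table and column names are hypothetical:

```sql
BEGIN TRANSACTION;

-- Exclusive locks are taken on the new rows, a schema stability lock
-- on the table, and intent-exclusive locks on the page and table.
-- The inserted rows are written to the transaction log.
INSERT INTO dbo.Orders (CustomerId, Amount)
VALUES (42, 19.99);

-- While this transaction is open, another session under READ COMMITTED
-- that tries to read these rows will wait on the exclusive locks.

COMMIT TRANSACTION;  -- log records marked committed, locks released
```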
As a result, the advantage of a short transaction is that rows are locked only briefly, so your users will most likely not notice any delays; the downside is the relatively high overhead of starting and committing each transaction.
The advantage of a long transaction is the low relative overhead of starting and committing it; the downsides are longer waits for data access and growth of the transaction log, which can become a problem with very large transactions.
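A common compromise between the two extremes is to commit in medium-sized batches, so each transaction stays short while the per-transaction overhead remains low. A sketch, assuming the source rows sit in a hypothetical staging table and a batch size of 10,000 (both illustrative):

```sql
DECLARE @batch INT = 10000;  -- illustrative batch size; tune for your workload

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- Move up to @batch rows from the staging table into the target table.
    DELETE TOP (@batch) FROM dbo.Staging
    OUTPUT DELETED.CustomerId, DELETED.Amount
    INTO dbo.Orders (CustomerId, Amount);

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;  -- nothing left to move
        BREAK;
    END;

    COMMIT TRANSACTION;  -- locks released; log truncation can proceed
END;
```

Each iteration holds its locks only for the duration of one batch, and under the SIMPLE recovery model the log records from committed batches can be truncated while the loop is still running.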
The longer waits for data access can be "cured" by choosing a different transaction isolation level: for example, READ UNCOMMITTED (which has its own negative consequences, such as dirty reads) or READ COMMITTED SNAPSHOT (which increases tempdb load).
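Both options mentioned above are set differently, which is easy to mix up: one is a per-session setting, the other a per-database one. A sketch, with a hypothetical database name:

```sql
-- Per-session: allow dirty reads for this connection only.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;  -- may see rows from uncommitted transactions

-- Per-database: make READ COMMITTED use row versioning instead of
-- shared locks; the row versions are kept in tempdb.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
```

With READ_COMMITTED_SNAPSHOT enabled, readers see the last committed version of a row instead of waiting on the writer's exclusive locks, at the cost of the extra tempdb activity mentioned above.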