Small Tips to Gain Big Performance

A sysadmin can run any script on SQL Server. What if one of them runs a script with bad results? Can you detect it? Yes. Just alter your audit specification: ALTER SERVER AUDIT [Audit-xxxxxx] WHERE ([object_name]='sysadmin')
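As a minimal sketch (the audit name [Audit-xxxxxx] is a placeholder from the tip above, and an audit must be stopped before its filter can be changed):

```sql
-- Stop the audit before altering it; the audit name is a placeholder.
ALTER SERVER AUDIT [Audit-xxxxxx] WITH (STATE = OFF);

-- Add the filter predicate from the tip above.
ALTER SERVER AUDIT [Audit-xxxxxx]
    WHERE ([object_name] = 'sysadmin');

-- Re-enable the audit.
ALTER SERVER AUDIT [Audit-xxxxxx] WITH (STATE = ON);
```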
There are four different types of checkpoints:

- Internal checkpoints occur during some SQL Server operations, such as starting a database backup or creating a database snapshot.
- Manual checkpoints occur, as the name indicates, when users trigger them with the CHECKPOINT command.
- Automatic checkpoints: historically, SQL Server used automatic checkpoints, with the recovery interval controlled at the server level. The checkpoint process wakes up once or a few times each recovery interval and flushes dirty data pages to disk.
- Indirect checkpoints: starting with SQL Server 2012, you have another option. With this method, SQL Server tries to balance I/O load by executing checkpoints much more frequently, in some cases even continuously.
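The last two types above can be demonstrated in a short sketch (the database name [MyDatabase] and the 60-second target are illustrative values):

```sql
-- Manual checkpoint: flush dirty pages of the current database now.
CHECKPOINT;

-- Indirect checkpoint (SQL Server 2012+): let SQL Server spread checkpoint
-- I/O so that crash recovery targets roughly 60 seconds.
ALTER DATABASE [MyDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;
```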
Do not spread clustered indexes, nonclustered indexes, or large object (LOB) data across multiple filegroups. This rarely helps performance and may introduce issues in cases of database corruption.
Create multiple data files in filegroups with volatile data. Make sure that all data files have the same size and auto-growth parameters specified in MB; enable the AUTOGROW_ALL_FILES option.
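A sketch of this setup, assuming a hypothetical database [MyDatabase], filegroup [VolatileData], and drive paths:

```sql
ALTER DATABASE [MyDatabase] ADD FILEGROUP [VolatileData];

-- Two equally sized files with identical growth increments in MB.
ALTER DATABASE [MyDatabase] ADD FILE
    (NAME = N'VolatileData1', FILENAME = N'D:\Data\VolatileData1.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB),
    (NAME = N'VolatileData2', FILENAME = N'D:\Data\VolatileData2.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB)
TO FILEGROUP [VolatileData];

-- Grow all files in the filegroup together to keep them the same size.
ALTER DATABASE [MyDatabase] MODIFY FILEGROUP [VolatileData] AUTOGROW_ALL_FILES;
```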
SQL Server splits transaction logs into chunks called Virtual Log Files (VLFs) and manages them as single units. For example, SQL Server cannot truncate and reuse a VLF if it contains even a single active log record. Pay attention to the number of VLFs in the database. Too few of them leads to very large VLFs, which makes log management and truncation suboptimal. Too many small VLFs degrade the performance of transaction log operations. Try not to exceed several hundred VLFs in production systems. You can count the VLFs in a database with the sys.dm_db_log_info dynamic management view in SQL Server 2016 and above. In older versions of SQL Server, you can obtain the same information by running DBCC LOGINFO.
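Both checks mentioned above can be run as follows:

```sql
-- SQL Server 2016 and above: count VLFs in the current database.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- Older versions: DBCC LOGINFO returns one row per VLF.
DBCC LOGINFO;
```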
When the AUTO_SHRINK option is enabled, SQL Server periodically shrinks the database and releases unused free space from the files to the OS. But on the next INSERT, the database has to grow again according to the growth settings you configured, only to be shrunk once more by auto-shrink. This grow-and-shrink cycle is expensive, so it is better not to set this parameter for a database.
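A quick way to find and fix offenders (the database name [MyDatabase] is a placeholder):

```sql
-- List databases with auto-shrink enabled.
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

-- Disable it for a given database.
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
```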
Set MAXDOP to 1/4 of the number of available CPUs in OLTP systems and to half that number in data warehouse systems. Do not exceed the number of CPUs in a NUMA node.
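For example, on a hypothetical 32-core OLTP server whose NUMA nodes have at least 8 CPUs each, the rule above gives MAXDOP 8:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 32 cores / 4 = 8 for an OLTP workload.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```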
This configuration option controls how SQL Server caches execution plans of ad-hoc (nonparameterized) queries. When it is disabled (the default), SQL Server caches full execution plans of those statements, which may significantly increase plan cache memory usage. Conversely, when this setting is enabled, SQL Server starts by caching a small structure (just a few hundred bytes) called a plan stub, replacing it with the full execution plan if the ad-hoc query is executed a second time.
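The behavior described appears to be the "optimize for ad hoc workloads" server option; assuming that is the setting meant, enabling it looks like this:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cache only a plan stub for first-time ad-hoc queries.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```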
If the server has 8 or fewer CPU cores, create the same number of data files. If the server has more than 8 CPU cores, use either 8 data files or 1/4 of the number of cores, whichever is greater, rounding up in batches of 4 files. For example, use 8 data files in a 24-core server and 12 data files in a 40-core server.
Make sure that all TEMPDB data files have the same initial size and auto-growth parameters specified in megabytes (MB) rather than as a percentage.
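A sketch for the 24-core example above (8 files); the sizes, growth increments, and file paths are illustrative:

```sql
-- Resize the primary tempdb file to match the others.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = N'tempdev', SIZE = 1024MB, FILEGROWTH = 256MB);

-- Add additional equally sized files.
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB);
-- ...repeat for tempdev3 through tempdev8.
```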