The Problem With Clicking Your Way Through 3,000 Rows
A client came to us mid-migration last year with a familiar problem. They had 11 SharePoint lists and roughly 3,000 rows of legacy data sitting in CSV exports. Their previous vendor’s solution? Copy-paste. Row by row. Through the browser.
That’s not a migration plan. That’s how you build RSI.
Bulk importing data into SharePoint 2016 lists using PowerShell is the right call for anything beyond a dozen rows. It’s repeatable, auditable, and doesn’t care whether you’re importing 50 records or 50,000. The SharePoint object model exposes everything you need through the Microsoft.SharePoint.PowerShell snap-in, and once you understand how list items work, the pattern is dead simple.
Why PowerShell Beats Third-Party Import Tools
I’ll be direct: there are third-party tools that claim to handle SharePoint imports. Some of them are fine. Most of them cost money, need to be installed on every admin workstation, and add a dependency you don’t need.
PowerShell is already there. It runs on the SharePoint box. The snap-in ships with the product. You write the script once, store it in your runbooks, and any sysadmin on the team can run it without installing anything.
The counterargument is that some GUI tools handle column mapping with a drag-and-drop interface. Fair point — that’s genuinely easier for non-technical staff. But if you’re reading this, you’re probably the person being asked to do the import, and you want it done correctly and repeatably. Every time.
What You Need Before Running Anything
A few prerequisites before you touch the script:
- Run this on the SharePoint server itself, or on a box with the SharePoint Management Shell installed. The snap-in isn’t available on random workstations.
- Your account needs at minimum Contribute permissions on the target list.
- Your source data should be a clean CSV with consistent headers. Mixed date formats in a single column will destroy you at 2AM.
- Know the internal column names, not the display names. SharePoint internal names and display names diverge constantly, especially after someone renamed a column three years ago and nobody documented it.
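A quick way to get that mapping before you write a single line of the import: dump the list's field schema. This assumes a SharePoint Management Shell session; the site URL and list name below are placeholders for your own.

```powershell
# Dump display name vs. internal name for every non-hidden field on the list.
# Run this once before writing your column mapping. URL and list name are examples.
$web  = Get-SPWeb "http://sp2016/sites/hr"
$list = $web.Lists["Employees"]

$list.Fields |
    Where-Object { -not $_.Hidden } |
    Select-Object Title, InternalName, TypeAsString |
    Sort-Object Title |
    Format-Table -AutoSize

$web.Dispose()
```

Keep the output next to your CSV headers while you write the mapping; the renamed-three-years-ago columns show up immediately.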
One more thing: back up before you bulk import. If you’re doing this on a production environment, make sure you have a current snapshot or content database backup in place. We’ve had clients skip this step, then discover their CSV had 200 duplicate rows — not a fun conversation. If you’re also running Microsoft 365 workloads, take a look at proper Office 365 backup coverage while you’re thinking about data protection. On-prem SP2016 is a separate beast, but the principle is the same.
The Param Block
Start with parameters. Always. A script that hardcodes site URLs or list names is a script you’ll have to rewrite every time someone points it at a different environment.
param(
    [Parameter(Mandatory=$true)]
    [string]$SiteUrl,

    [Parameter(Mandatory=$true)]
    [string]$ListName,

    [Parameter(Mandatory=$true)]
    [string]$CsvPath,

    [Parameter(Mandatory=$false)]
    [switch]$WhatIf
)
The $WhatIf switch is optional but worth including. It lets you do a dry run — validate your CSV structure and column mapping before writing a single item. One caveat: if you later decorate the script with [CmdletBinding(SupportsShouldProcess=$true)], PowerShell reserves -WhatIf as a common parameter, so either lean on ShouldProcess or keep the plain param block shown here; don't do both. Same pattern I covered in the GPO backup scripts article: validate first, commit second.
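Invocation then looks like this. The script filename, URL, and paths are examples — substitute your own:

```powershell
# Dry run first: parses the CSV and reports what would be written, adds nothing.
.\Import-ListData.ps1 -SiteUrl "http://sp2016/sites/hr" -ListName "Employees" `
    -CsvPath "C:\Imports\employees.csv" -WhatIf

# Then the real run, same parameters minus the switch.
.\Import-ListData.ps1 -SiteUrl "http://sp2016/sites/hr" -ListName "Employees" `
    -CsvPath "C:\Imports\employees.csv"
```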
Loading the SharePoint Snap-In
The Microsoft.SharePoint.PowerShell snap-in is what gives you access to the full SharePoint object model from a PowerShell session. It loads automatically in the SharePoint Management Shell, but if you’re running a scheduled task or calling this script from an orchestration layer, you need to load it explicitly.
if ($null -eq (Get-PSSnapin -Name "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue)) {
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
The conditional check prevents errors if the snap-in is already loaded in the session. Small thing. Gets annoying fast if you skip it.
Connecting to the List
The SharePoint object model follows a strict hierarchy: Farm → Web Application → Site Collection → Web → List → List Items. To reach a list, you walk down that chain. For an import script, you only need to go as far as SPWeb and then pull the list by name.
$web = Get-SPWeb -Identity $SiteUrl
$list = $web.Lists[$ListName]

if ($null -eq $list) {
    Write-Error "List '$ListName' not found at $SiteUrl"
    $web.Dispose()
    exit 1
}
Call $web.Dispose() when you’re finished. SPWeb objects hold unmanaged memory resources. In an interactive session it’s not a disaster, but in a script that runs on a schedule against a production farm you’ll notice the leak eventually. The Microsoft SharePoint Server PowerShell documentation covers the full cmdlet reference for SP2016 — bookmark it for column type edge cases.
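If you want disposal guaranteed even when the import throws partway through, wrap the working section in try/finally. A sketch of the skeleton:

```powershell
$web = Get-SPWeb -Identity $SiteUrl
try {
    $list = $web.Lists[$ListName]
    if ($null -eq $list) { throw "List '$ListName' not found at $SiteUrl" }

    # ... import loop goes here ...
}
finally {
    # Runs on success, failure, and Ctrl+C alike, so the SPWeb never leaks.
    $web.Dispose()
}
```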
The Import Loop
This is where the actual work happens. Import your CSV, loop through each row, create a new list item, assign field values, call .Update(). That’s the whole pattern.
$data = Import-Csv -Path $CsvPath
$count = 0
$errors = 0

foreach ($row in $data) {
    try {
        if ($WhatIf) {
            Write-Host "[WHATIF] Would add: $($row.Title)"
            continue
        }

        $item = $list.Items.Add()

        # Map CSV columns to SharePoint internal field names
        $item["Title"] = $row.Title
        $item["Department"] = $row.Department
        $item["ContactEmail"] = $row.Email
        $item["StartDate"] = [datetime]::Parse($row.StartDate)

        $item.Update()
        $count++

        if ($count % 100 -eq 0) {
            Write-Host "Processed $count rows..."
        }
    }
    catch {
        Write-Warning "Error on row $($count + $errors + 1): $_"
        $errors++
    }
}

$web.Dispose()
Write-Host "Done. Added: $count | Errors: $errors"
A few things worth calling out. Date fields need explicit parsing — SharePoint won’t accept a raw string for a date column type. Use [datetime]::Parse() and make sure your CSV date format is consistent throughout. Mixed formats in a single column will produce partial failures that are annoying to debug after the fact.
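If you control the export, lock the CSV to one date format and parse it strictly. [datetime]::ParseExact with the invariant culture rejects anything that doesn't match, which turns silent misparses (is 03/04 March 4th or April 3rd?) into loud, loggable failures. A sketch — the 'yyyy-MM-dd' format string is an example; match it to whatever your export actually emits:

```powershell
# Strict date parsing: accept exactly one format, fail loudly on anything else.
$culture = [System.Globalization.CultureInfo]::InvariantCulture

function ConvertTo-ImportDate {
    param([string]$Value)
    # Throws a FormatException on any value that isn't exactly yyyy-MM-dd.
    [datetime]::ParseExact($Value, 'yyyy-MM-dd', $culture)
}

ConvertTo-ImportDate '2016-07-04'     # parses cleanly
# ConvertTo-ImportDate '07/04/2016'   # throws - which is exactly what you want
```

Drop the function in place of the bare [datetime]::Parse() call and the bad rows land in your error log instead of your list.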
The progress report every 100 rows keeps you sane on large imports. You want feedback that the script is running, not 20 minutes of a blank terminal followed by existential doubt.
Handling Errors Without Losing Your Progress
After inheriting a SharePoint 2016 environment from another vendor last quarter, we found a list in a partially-imported state. The previous team had attempted a bulk import, hit a bad row, and their script terminated immediately — no error handling, no recovery. Roughly 800 of 2,400 rows had been written. Nobody knew which 800.
Wrap your item creation in a try/catch. Log the failures. Keep moving. A handful of malformed rows shouldn’t stop the other 2,300 from getting imported. Write failures to a separate CSV so you can fix and re-run just those rows:
catch {
    $errorEntry = [PSCustomObject]@{
        RowNumber = ($count + $errors + 1)
        Title     = $row.Title
        Error     = $_.Exception.Message
    }
    $errorEntry | Export-Csv -Path ".\import_errors.csv" -Append -NoTypeInformation
    $errors++
}
Now you have a clean record of exactly what failed and why. Reimport the error file after fixing the data.
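Matching the error log back to the source file can be scripted too. Here's a hypothetical helper (not part of the script above) that assumes Title uniquely identifies a row in your data:

```powershell
function Select-FailedRows {
    param(
        [object[]]$Source,    # rows from Import-Csv on the original file
        [object[]]$ErrorLog   # rows from Import-Csv on import_errors.csv
    )
    # Keep only the source rows whose Title appears in the error log.
    $failed = $ErrorLog | ForEach-Object { $_.Title }
    $Source | Where-Object { $failed -contains $_.Title }
}

# Usage: fix the data in retry.csv, then point the import script at it.
# $retry = Select-FailedRows -Source (Import-Csv $CsvPath) -ErrorLog (Import-Csv .\import_errors.csv)
# $retry | Export-Csv .\retry.csv -NoTypeInformation
```

If Title isn't unique in your data, key on RowNumber from the error log instead.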
Performance on Large Datasets
For anything over 5,000 rows, think about the write pattern. Calling $item.Update() after each row is a discrete database write every single time. For moderate imports this is fine. For very large datasets against a farm that's also handling production traffic, consider batching or pacing the writes. Two cheap wins: swap $list.Items.Add() for $list.AddItem(), which creates the item without loading the existing item collection first, and remember that SharePoint 2016's default list view threshold is 5,000 items; the import will write past that point, but unindexed views and queries against the list will start throttling.
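One server-side option worth knowing about is SPWeb.ProcessBatchData(), which submits many item creations in a single CAML batch rather than one round-trip per .Update(). This is a hedged sketch — the Method/SetVar shape follows the server object model's documented batch format, and the field name is the urn: prefix plus the internal column name — but verify it against a dev farm before trusting it with production data:

```powershell
# Sketch: batched inserts via ProcessBatchData. $data, $list, and $web come from
# the surrounding script; only Title is mapped here as an example column.
$sb = New-Object System.Text.StringBuilder
[void]$sb.Append('<?xml version="1.0" encoding="UTF-8"?><ows:Batch OnError="Continue">')

$i = 1
foreach ($row in $data) {
    $title = [System.Security.SecurityElement]::Escape($row.Title)   # XML-escape values
    [void]$sb.Append("<Method ID=`"$i`">")
    [void]$sb.Append("<SetList>$($list.ID)</SetList>")
    [void]$sb.Append('<SetVar Name="Cmd">Save</SetVar><SetVar Name="ID">New</SetVar>')
    [void]$sb.Append("<SetVar Name=`"urn:schemas-microsoft-com:office:office#Title`">$title</SetVar>")
    [void]$sb.Append('</Method>')
    $i++
}

[void]$sb.Append('</ows:Batch>')
$result = $web.ProcessBatchData($sb.ToString())   # returns per-method result XML
```

Parse $result for error codes afterward; unlike the try/catch loop, failures here surface in the returned XML rather than as exceptions.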
If you’re running concurrent operations elsewhere during the import — scheduled jobs, search crawls, workflow engine activity — you’ll want to think about resource contention. The same thinking that applies to concurrent PowerShell network operations applies here: understand your load before you hammer shared infrastructure.
One more: don’t import into a list with active workflows attached if you can help it. Bulk inserts triggering dozens of simultaneous workflow instances will make the SharePoint server unhappy in very visible ways.
What This Won’t Handle
Honest caveats before you run this in prod:
- Lookup columns: Assigning a lookup field requires the target item's ID in the lookup list — not the display value. You'll need to query the lookup list first and resolve IDs. Adds complexity.
- Managed metadata / Term Store: Fields backed by the term store require special handling via the TaxonomyField class. The string assignment shown above won't work here.
- People and Group columns: Requires resolving users via $web.EnsureUser() before assignment. You can't pass a display name string and expect it to stick.
- Attachments: Out of scope here. Bulk importing items with file attachments is a separate, more involved problem.
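For reference, here's roughly what lookup and people resolution look like. The list names, column names, and CSV fields are examples, and the lookup map should be built once outside the import loop since it reads the whole lookup list:

```powershell
# Lookup column: resolve the display value to the target item's ID first.
# Build this map once, before the import loop starts.
$deptList = $web.Lists["Departments"]
$deptMap  = @{}
foreach ($d in $deptList.Items) { $deptMap[$d["Title"]] = $d.ID }

# Inside the loop: assign an SPFieldLookupValue built from the resolved ID.
$lookupId = $deptMap[$row.Department]
$item["DepartmentLookup"] = New-Object Microsoft.SharePoint.SPFieldLookupValue($lookupId, $row.Department)

# People column: EnsureUser resolves (and if necessary adds) the user on the site.
$user = $web.EnsureUser($row.ManagerLogin)   # e.g. "DOMAIN\jsmith", not a display name
$item["Manager"] = $user
```

Guard both resolutions with the same try/catch as everything else — a department that doesn't exist or a departed employee's login are exactly the rows you want in the error CSV.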
For anything involving complex column types or high-volume data governance, Microsoft’s SharePoint developer documentation covers the object model in depth.
Put It Into Production
The full script — param block, snap-in load, list connection, import loop with error logging, and proper disposal — is roughly 60 lines. No third-party tools. No GUI. No per-seat licensing. Works on any SP2016 farm where you have appropriate access.
Store it in your runbooks. Parameterize it fully so it works against any site or list without modification. If you have recurring data feeds from a line-of-business system, wire it to a scheduled task and let it run.
If you’re doing this as part of a broader SharePoint migration, data consolidation project, or just want someone else to run it reliably — get in touch with us at SSE. We do this kind of work regularly across managed environments.


