Friday, 21 May 2010

Exporting and importing SharePoint sites

A number of our users have SharePoint (WSS 3.0) sites hosted in another office and wanted to move the contents down to our local WSS 3.0 install. This was not as straightforward as you might imagine. We hit a number of gotchas and had to provide workarounds that are documented here so that others can benefit from our experiences.

Running out of disk space on the C: drive

When running an export using stsadm, we kept filling up the C: drive despite exporting to a separate drive. The reason for this is that SharePoint writes temporary files to the location defined by the %TMP% variable. This defaults to the C: drive!

To fix, open a command prompt and type:

set TMP=E:\Tmp

(replace E:\Tmp with the drive and folder you want to use for your temporary storage; note that this setting only applies to the current command prompt session). Then run the stsadm export from that same prompt and it should work!
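Before kicking off a large export, it can also help to confirm how much free space the temporary location actually has. A minimal illustrative sketch in POSIX shell (the directory and the 1 GB threshold are placeholder assumptions; on the SharePoint box you would check whichever drive %TMP% points at):

```shell
# Illustrative check: does the temp directory have enough free space?
# TMP_DIR and REQUIRED_MB are placeholder values for this sketch.
TMP_DIR=/tmp
REQUIRED_MB=1024

# df -Pm reports sizes in 1 MB blocks; column 4 of the second row is
# the available space for the filesystem holding TMP_DIR.
AVAIL_MB=$(df -Pm "$TMP_DIR" | awk 'NR==2 {print $4}')

if [ "$AVAIL_MB" -lt "$REQUIRED_MB" ]; then
  echo "WARNING: only ${AVAIL_MB} MB free in $TMP_DIR"
else
  echo "OK: ${AVAIL_MB} MB free in $TMP_DIR"
fi
```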

Commands to export and import a named site

The command we used to export the site was:

stsadm -o export -url http://old-sharepointserver/hostedsites/development -filename e:\ -includeusersecurity -versions 4 -overwrite

The above command exports the site called "development", referenced at http://old-sharepointserver/hostedsites/development, to the export file specified by -filename. The security information will be included in the export, as will all versions of documents.

To import, the following command was used on the new server:

stsadm -o import -url http://new-sharepointserver/development -filename -includeusersecurity

Note that we are importing the site "development" into the top level and not as a subsite beneath hostedsites. If the name of the site is omitted, the top level site is overwritten!

The gotchas

When running the import, we received the following message:

"The file cannot be imported because its parent web <site path> does not exist"

This error is not helpful; for us, the problem was permission related. The accounts we had used to export and import the data (albeit domain admin accounts) were different from the site collection administrators. To fix it, we had to do the following:

Make sure the site collection administrator is the same on both the source and destination servers.

When running the export and import, make sure you are running the stsadm commands as the site collection administrator. This ensures the permissions are aligned and the import should work.
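If the site collection administrators differ between the two servers, stsadm's siteowner operation can bring them into line. A hedged sketch that just assembles the command (DOMAIN\spadmin and the URL are placeholders, not values from our environment):

```shell
# Sketch only: build the stsadm command that aligns the site collection
# owner on the destination server. site_url and owner are placeholders.
site_url="http://new-sharepointserver/development"
owner='DOMAIN\spadmin'

cmd="stsadm -o siteowner -url $site_url -ownerlogin $owner"
echo "$cmd"
```

The same owner would then be set on the source server before running the export.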

stsadm Import error: The 'ASPXPageIndexMode' attribute is not declared

Not sure what the cause of this error is, but we found a fix online:

To get round it I edited C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML\DeploymentManifest.xsd on the destination server:

under section

   <!-- SPWeb definition -->

I added the following:

<xs:attribute name="ASPXPageIndexMode" type="xs:string" use="optional"></xs:attribute>
<xs:attribute name="NoCrawl" type="xs:boolean" use="optional"></xs:attribute>
<xs:attribute name="CacheAllSchema" type="xs:boolean" use="optional"></xs:attribute>
<xs:attribute name="AllowAutomaticASPXPageIndexing" type="xs:boolean" use="optional"></xs:attribute>

With these gotchas overcome, we were able to successfully import the new site.

Wednesday, 5 May 2010

IBM pSeries (AIX) to Sun StorageTek 2540 - Part 2

The saga continues...

At the end of my previous post, the SAN LUN was successfully being seen by AIX as a single device using the Cambex driver.

With the multipathing fixed, it was time to build some WPARs. Everything went smoothly until we rebooted, at which point hdisk10 was still visible but I could no longer see the logical volumes on the disk. Furthermore, I couldn't activate the volume group I'd created, "wparvg", getting the message:

bash-3.2# varyonvg wparvg
0516-013 varyonvg: The volume group cannot be varied on because there are no good copies of the descriptor area.

To cut a long story short (the long story primarily consisting of me rebooting, removing the device in smit and running cfgmgr in various combinations), the Cambex install (/usr/lpp/cbxdpf) includes some useful commands. Running dpfutil listall showed that hdisk10 was configured in the following way:

=== /usr/lpp/cbxdpf/dpfutil listall ===
# Device Active Standby
hdisk10 cbx1 (fscsi0 0x040200,1) cbx0 (fscsi0 0x030200,1)

This means that it's using path cbx1 as its active path, with cbx0 as the failover path. Some exploring with the dpfutil command showed it supports the following options:

dpfutil [command]
Commands may be abbreviated:
HELP - Display this message
LISTALL - List devices and path configuration
ACTIVATE [cbxN] - Manually switch virtual disk to path [cbxN]
VARYOFFLINE [cbxN] - Mark path [cbxN] unavailable
VARYONLINE [cbxN] - Mark path [cbxN] available
MARKFORDELETE [cbxN] - Force path off even if open (may crash)
LIST_HBAS - List HBAs with DPF paths
HBA_SET_WWN [cbxN] [no|yes] - Set WWN preferred path
TARGET_SET_WWN [cbxN] [yes|no] - Set target preferred path
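The listall output is easy to parse if you want to keep an eye on which path is active, e.g. from a cron job. A sketch assuming output in the format shown above (fed here from a captured string, since dpfutil itself only exists on the AIX box):

```shell
# Parse dpfutil listall-style output: field 1 is the device, field 2 is
# the active path. The sample line is copied from the listall output above.
listall='hdisk10 cbx1 (fscsi0 0x040200,1) cbx0 (fscsi0 0x030200,1)'

echo "$listall" | awk '{print $1 " active on " $2}'
# prints: hdisk10 active on cbx1
```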

I tried to manually switch over the paths:

bash-3.2# ./dpfutil activate cbx0
bash-3.2# ./dpfutil listall
# Device Active Standby
hdisk10 cbx0 (fscsi0 0x030200,1) cbx1 (fscsi0 0x040200,1)

With this done, I then tried the varyonvg again:

bash-3.2# varyonvg wparvg
bash-3.2# lsvg wparvg
VOLUME GROUP:       wparvg                   VG IDENTIFIER:  00048ada0000d3000000012865034072
VG STATE:           active                   PP SIZE:        256 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      999 (255744 megabytes)
MAX LVs:            256                      FREE PPs:       979 (250624 megabytes)
LVs:                2                        USED PPs:       20 (5120 megabytes)
OPEN LVs:           0                        QUORUM:         2 (Enabled)
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 1024 kilobyte(s)         AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable


Not sure what this says about the failover capabilities of the driver... It appears that when the VG is active, manually failing over the paths works okay and the VG remains active.

Fortunately this isn't a mission critical production box (it's a development compile box for porting our code from Solaris to AIX).