Friday, October 20, 2017

[ATTENTION] Be careful with Copy-SPSite and Colligo!

Analysis of Copy-SPSite and Colligo

Folks, be careful when using Colligo in your organization together with Copy-SPSite. Otherwise you may end up with issues where Colligo is unable to sync some sites.
According to the description of Copy-SPSite provided by Microsoft:

Use the Copy-SPSite cmdlet to make a copy of a site collection from an implied source content database to a specified destination content database. The copy of the site collection has a new URL and a new SiteID. When you have database snapshot capabilities on a computer running SQL Server, a temporary snapshot of the source database is created for the duration of the copy to prevent any data changes during the copy process. If you do not have database snapshot capabilities on the server running SQL Server, you can back up the source and restore it to the destination to get the same result.

Copy-SPSite creates a new site with a new URL and a new SiteID, but the WebID stays the same: the original site and the clone share the same WebID. Yes, that is true, and this is exactly what confuses Colligo, because Colligo relies on the WebID as a unique identifier.

How can you verify this? Here are the steps:

1. Find out the "Storage Location" for Colligo Briefcase: go to View -> Options.
2. Go to that location; you will see a "Sites.db" file:

3. Close Colligo, then open the file with "DB Browser for SQLite" and open the Webs table.
Under Server Name you will find the WebID of the site, and if you go to the SharePoint server you will see it is the same GUID returned by Get-SPWeb. Compare the WebIDs of the original site and the one created with Copy-SPSite: they are the same. Naturally, the logic used by Colligo then does not allow you to synchronize sites created with Copy-SPSite.
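You can confirm the duplicate WebID from the SharePoint Management Shell as well. A minimal sketch (both URLs are placeholders for your original and copied sites):

```powershell
# Compare the SiteID and WebID of the original site and the Copy-SPSite clone.
# The two URLs below are placeholders - replace them with your own sites.
$original = Get-SPWeb "http://sharepoint/sites/original"
$copy     = Get-SPWeb "http://sharepoint/sites/copy"

"Original - SiteID: $($original.Site.ID)  WebID: $($original.ID)"
"Copy     - SiteID: $($copy.Site.ID)  WebID: $($copy.ID)"

# The SiteIDs differ, but the WebIDs match - this is what breaks Colligo.
if ($original.ID -eq $copy.ID) { "WebIDs are identical!" }

$original.Dispose(); $copy.Dispose()
```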

Interestingly, OneDrive for Business does not have this problem and synchronization works fine. Why? Because Microsoft uses different logic for the unique identifier. Normally the OneDrive for Business local cache database is located here: C:\Users\Pavlov Aleksandar\AppData\Local\Microsoft\Office\15.0\OfficeFileCache\CentralTable.accdb. Open the database in Access and then open the MasterFile table.

As you can see, the logic Microsoft uses for storing the information about the site in OneDrive for Business is GUID/relative path, in this case STID${CB935F8C-1D73-4BEC-B54E-D2E23D013CDC}/sites/loadtest22.

At first it was too strange for me to believe that the WebID of two or more sites could be the same; in my opinion Microsoft changed this logic a while ago. But this may be a huge problem for companies using Colligo for file synchronization between SharePoint and local PCs if nobody at Colligo pays attention to it.

Hope you will find this post helpful.

If so please share it.

Friday, June 9, 2017

Deploying new solutions in SharePoint 2013 takes too much time

One of my customers has TEST, QA and PROD environments, where QA and PROD are equal (same number of WFE servers, two in this case, with the same configuration and settings; if PROD goes down, we can switch to QA in minutes and it will behave the same).

What happened: for a while, deploying a new version of the solutions had been taking too much time, around 1.5 hours, compared both with the past and with QA (15 min). It is horrible to sit in front of the monitor waiting for a slow deployment to finish. Yes, it finished without errors, but the waiting was killing me. I wasn't aware of any change at the infrastructure level that could cause this slowness.

I have to mention that there wasn't any slowness on the sites, and locally on the server everything was working fine: no high memory or CPU consumption, nothing that could tell you why the deployment was so slow.

In the QA environment the deployment time was normal, around 15-20 min, so this also excluded the .wsp packages as a suspected root cause.

I started looking for a solution on the internet and was surprised how many articles and posts there are about SharePoint slowness, sites loading too slowly and so on, but nothing related to my case. That's why I decided to write this post, because I was able to find the root cause.

Incidentally, I opened RDP sessions to both WFE servers, and the size of the windows was different. Guess what:

Can you see where is the problem from the picture above?

Yes, that is right. 

The two servers showed different times; the second server was 5 minutes behind the real time.

OK, I thought this could potentially be the reason and went to check the time zones, but on both servers the time zones were correct; it was the time itself that was not. After a while I checked the services and found out that the "Windows Time" service on both machines was set to Disabled. I checked QA, and there the service was running. Enabling the service and setting it to Automatic was the fix I needed.

In the future, if you have a slow deployment, make sure you check the "Windows Time" service: set it to Automatic and start it.
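The fix can also be scripted from an elevated Windows PowerShell prompt on one of the servers. A sketch, assuming Windows PowerShell 5.1 and placeholder server names:

```powershell
# Set the Windows Time service (w32time) to Automatic and start it
# on both WFE servers. "WFE01" and "WFE02" are placeholder names.
foreach ($server in "WFE01", "WFE02") {
    Set-Service -Name "w32time" -ComputerName $server -StartupType Automatic
    $svc = Get-Service -Name "w32time" -ComputerName $server
    if ($svc.Status -ne "Running") { $svc.Start() }
}

# Then, locally on each server, force a clock resynchronization:
w32tm /resync
```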

Hope you will find this post helpful, if so please share it.


Friday, April 21, 2017

Proactively increase the max site storage quota

With this post I would like to show you my way of proactively increasing the site storage quota. My task was to eliminate the need for users to request a site storage quota increase. I wrote a small PowerShell script which you can download from here

The script checks the current site size and compares it with the assigned max site storage quota; if the occupied storage is over 80%, the site is a potential candidate for a quota increase. Of course, you can change this value to whatever suits you. You will also be able to see the assigned quota template, if you have a custom quota created; otherwise it will be shown as "No Template Applied", meaning an "Individual" quota has been assigned, and you will still need to increase the assigned max storage size. Read-only sites are displayed in the report only if the warning quota is reached (in my case the warning quota is 1024 MB), so you have to switch them back to read/write before you change the quota. If necessary, you can change the warning quota limit inside the script (please read the hints/comments in the script).
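The core idea can be sketched like this. This is a simplified illustration of the logic described above, not the full script; the 80% threshold is the configurable value mentioned:

```powershell
# Flag site collections whose used storage exceeds 80% of the assigned
# max storage quota - candidates for a proactive quota increase.
$threshold = 0.8

Get-SPSite -Limit All | ForEach-Object {
    $usedMB = [math]::Round($_.Usage.Storage / 1MB)
    $maxMB  = [math]::Round($_.Quota.StorageMaximumLevel / 1MB)
    if ($maxMB -gt 0 -and ($usedMB / $maxMB) -gt $threshold) {
        [pscustomobject]@{
            Url           = $_.Url
            UsedMB        = $usedMB
            MaxMB         = $maxMB
            # QuotaID 0 means no quota template, i.e. an individual quota
            QuotaTemplate = if ($_.Quota.QuotaID -eq 0) { "No Template Applied" }
                            else { $_.Quota.QuotaID }
        }
    }
    $_.Dispose()
}
```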

The output looks like:

The output will be stored in a file, and this file will be sent via email. Please check the settings in the script to understand to whom the report is sent. Reports older than 2 days are removed from the folder.

So give it a try and let me know if it works or not.


Monday, March 27, 2017

Could not retrieve profile schema from server

How many of you have faced this strange behavior of SharePoint Designer 2013?

It happens when trying to use User Profile as a data source in a workflow, something similar to the screenshot below:

Searching Google did not give me the expected solution. A lot of people suggested granting the web application pool account permissions to the Search Service database. But what does that actually mean? I granted read permissions to all Search-related databases, but, as you might expect, the problem persisted.

So many articles referred to Search that I started thinking the problem really was there, but none of the explanations was exactly correct. The right way to make it work is:
  1. Go to Central Admin
  2. Go to Manage Service Applications
  3. Select Search Service Application, but don't click on it
  4. Select Administrators
  5. Add the application pool account of the web application
  6. Grant "Read (Diagnostic Pages Only)" permission and click OK
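If you prefer PowerShell over Central Admin, I believe the same permission can be granted along these lines (the account name is a placeholder, and the rights string is my assumption based on the UI label; verify it on your farm):

```powershell
# Grant the web application pool account "Read (Diagnostic Pages Only)"
# on the Search Service Application administrators ACL.
# "DOMAIN\svc-webapppool" is a placeholder account name.
$ssa      = Get-SPEnterpriseSearchServiceApplication
$security = Get-SPServiceApplicationSecurity $ssa -Admin
$claim    = New-SPClaimsPrincipal -Identity "DOMAIN\svc-webapppool" `
                                  -IdentityType WindowsSamAccountName

Grant-SPObjectSecurity $security -Principal $claim `
                       -Rights "Read (Diagnostic Pages Only)"
Set-SPServiceApplicationSecurity $ssa -Admin $security
```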

Afterwards everything should work normally.

For now I don't have an exact explanation of why these settings are necessary. I will try to find out and post it here as a comment.

Monday, December 19, 2016

[HOW TO] understand who deleted a SharePoint 2013 site

A very interesting task: how to find out who deleted a site/site collection in SharePoint 2013. A few days ago I was asked to restore a site which had been deleted 1 or 2 days earlier. An easy task: I found the site among the deleted sites (Get-SPDeletedSite) and restored it with Restore-SPDeletedSite. But the next day I received a request to restore the same site again; it had been deleted once more, and Get-SPDeletedSite does not provide information about who performed the deletion.
I decided I had to find out who deleted the site. So I downloaded the IIS logs for the corresponding day (from the SPDeletedSite object you can see the date the site was deleted) and started analyzing them. But the logs are too big.
If you try to open them with Notepad or Notepad++, this is not going to work. So what is next? Fortunately, there is LogParser, the best IIS log analysis tool for every SharePoint administrator. So spare enough time to play with LogParser, because it will definitely save you time, help you a lot in your day-to-day work and, of course, impress your manager with its capabilities.
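To get the deletion date you need for picking the right IIS log files (u_exYYMMDD.log), a one-liner like this is enough:

```powershell
# List the deleted site collections with their deletion time, so you know
# which day's IIS logs to download and analyze with LogParser.
Get-SPDeletedSite | Select-Object Path, SiteId, DeletionTime | Format-Table -AutoSize
```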
Then I had to find out how the site was deleted, in order to build a LogParser query to execute over the IIS log files. There are 2 different ways to delete a site:
  1. From the Site Collection setting menu (deleteweb.aspx) or
  2. from Central administration (delsite.aspx)
Since I didn't know which way the site had been deleted, I built two LogParser queries:
  • LogParser.exe -i:W3C -o:W3C "select date,time, s-sitename, s-computername, s-ip, cs-method, cs-uri-stem, cs-username into 'C:\Tools\deletedSite.txt' from 'C:\Tools\u_ex*.log' where cs-uri-stem like '%delsite%'"
  • LogParser.exe -i:W3C -o:W3C "select date,time, s-sitename, s-computername, s-ip, cs-method, cs-uri-stem, cs-username into 'C:\Tools\deletedSite.txt' from 'C:\Tools\u_ex*.log' where cs-uri-stem like '%deleteweb%'"
In my case the site had been deleted from the Site Settings menu (deleteweb.aspx), and the result is as follows:

If you compare the DeletionTime (from SPDeletedSite) with the Date and Time from the IIS logs, you will see an exact match, and there are no other entries for deleteweb.aspx in the IIS logs. So this proves that this user deleted the site, and that he/she deleted it from the Site Settings menu (because of deleteweb.aspx).
Hope this will help someone; if so, share this post or leave a comment.
And this is definitely my last post for this year.
Enjoy the holidays and see you next year 2017.

Tuesday, October 18, 2016

Windows 10 Data DeDuplication after Anniversary update

This could be one of my last posts for this year.

One post outside the SharePoint world, but closely related. I'm using Windows 10 on my laptop (with 32 GB of RAM), where my virtual test machines (SharePoint 2013 and 2016) run. A pretty usual environment for every SharePoint enthusiast. And most of you know the performance problem when 3 VMs are running on a laptop. That's why I bought 2 additional SSDs (120 GB each) and put them in my laptop. So my OS runs on an SSD, and I have one SSD (120 GB) and one HDD (1 TB) as a playground for the SharePoint VMs, but the space on the SSD drive is insufficient.

Most of you already know about Data Deduplication, a really good feature which allows you to store more data than you usually can. I'm not going to explain here what Deduplication is and how it works; if you are interested, you can look it up.

I'm very happy with this feature/technology, even if it is not released for home usage. Officially it is available only in Microsoft's server products, but there is a way to install/activate it on your local Windows 10.

One day my machine needed to install some updates, and afterwards I was shocked because my SSD drive, where Deduplication was enabled, was no longer accessible. OK, I accepted the fact that I would lose the data on the disk and had to format it. But I definitely needed this feature and started searching for a solution, unfortunately for several days without success; I tried a lot of suggestions, but none of them worked. Pretty sad.

Fortunately, I posted a question to the MSDN forum and, with a lot of reading, found a solution. You just need to download these files from here (not working any more) and install them on your machine. Then you will be able to enable Deduplication on the drive you want. My machine was running Windows 10 Pro version 1607, OS build 14393.222.
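Once the Deduplication packages are installed, enabling the feature on a drive looks roughly like this (the drive letter is an example; run from an elevated prompt):

```powershell
# Enable Data Deduplication on drive D: and kick off an initial
# optimization job. Requires the Deduplication cmdlets to be installed.
Import-Module Deduplication

Enable-DedupVolume -Volume "D:"
Start-DedupJob -Volume "D:" -Type Optimization

# Later, check how much space you are saving:
Get-DedupStatus -Volume "D:"
```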

So try the Data Deduplication feature on your Windows 10 if you want, let me know whether it works for you, and share this post if you think it could be helpful for others.

Enjoy the holidays; I wish all the best to you and your family.
Hope to see you soon in the next 2017 year!

Tuesday, July 12, 2016

[SOLVED] Missing server side dependencies - [MissingAssembly]

Maybe most of you had seen this Health Analyzer Rule error:
[MissingAssembly] Assembly [WebProvisionedEvent, Version=, Culture=neutral, PublicKeyToken=a505ed585a006cfc] is referenced in the database [WSS_Content_05], but is not installed on the current farm. Please install any feature/solution which contains this assembly. One or more assemblies are referenced in the database [WSS_Content_05], but are not installed on the current farm. Please install any feature or solution which contains these assemblies.

It happened to me after the farm solution was redesigned and the new solution was redeployed. If you search Google for this error you will find a lot of suggestions on how to fix it, but first of all you have to understand what exactly is causing it. Here I'll describe my approach; afterwards you will be able to solve the issue without breaking your SharePoint 2013 farm. In my case it happened on a customer's SharePoint 2013 farm, but most probably the approach will also work on SharePoint 2010.

First, look at what is inside the solution you deployed: simply unzip it with 7-Zip and look inside. Try to find which feature is related to this WebProvisionedEvent; in most cases an event receiver is involved (as it was in my case). In my content database the path to the assembly was different: the one left over from the old solution, before the redesign. Thinking logically, it makes sense to reactivate the feature, so that the path to the new .dll is recorded in the content database. This should be the first step: reactivate the feature related to the event receiver. Some of the errors in the Health Analyzer will disappear, but not all of them. This is normal, because event receivers attached to a site, list or library are not removed when the feature is deactivated; you need to remove them manually or with PowerShell.
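Reactivating the feature can be done like this; the feature name and site URL below are placeholders for the feature you identified inside the .wsp:

```powershell
# Deactivate and re-activate the feature that contains the event receiver,
# so the content database records the new assembly reference.
# Feature identity and URL are placeholders.
Disable-SPFeature -Identity "MyCompany.WebProvisionedFeature" `
                  -Url "http://sharepoint/sites/affected" -Confirm:$false
Enable-SPFeature  -Identity "MyCompany.WebProvisionedFeature" `
                  -Url "http://sharepoint/sites/affected"
```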

I found the following script very helpful, but there is a problem with its formatting: if you copy and paste the script and try to use it, you will get an error and may conclude that the script is not working. What you need to do is type the script manually into your text editor and save it as .ps1, because, as I mentioned, copying and pasting it into a text editor or PowerShell ISE can produce errors due to different formatting. It is a bit long, but in order for it to work, write it out manually.

OK, so after I managed to figure out what was wrong with the code, I thought: YES, finally I'll fix this and get rid of the error in Health Analyzer. But unfortunately, after I listed which event receivers were left (using the "-ReportOnly" switch of the script) and then deleted all of them, THE ERROR IN HEALTH ANALYZER WAS STILL THERE, pointing to the same content database as in the beginning.

After hours spent troubleshooting and researching this error, I was lucky enough to find out that the "old" event receivers may exist in sites inside the recycle bin. I'm talking about deleted sites. The aforementioned script does not remove event receivers from sites inside the recycle bin (you can get a list of these sites with Get-SPDeletedSite). I read somewhere that if you remove these sites, the entries may be left in the content database, and then the only way to get rid of these event receivers is to remove them from the content database manually.

So my approach was:
  1. Export the list of these deleted sites (Get-SPDeletedSite) 
  2. Restore them (Restore-SPDeletedSite) 
  3. Reactivate the "faulty" feature 
  4. Check if the issue is solved in Health Analyzer
  5. If not, remove them with the script I already mentioned. 
If you follow these steps, the error will disappear. Then you can decide whether to remove the sites you just restored or leave them as they are.
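Steps 1 and 2 above can be scripted; a sketch (the export path is an example):

```powershell
# Step 1: export the list of deleted site collections for reference.
# The output path is an example.
Get-SPDeletedSite | Select-Object Path, SiteId, DeletionTime |
    Export-Csv -Path "C:\Tools\DeletedSites.csv" -NoTypeInformation

# Step 2: restore the deleted sites so the event receiver cleanup
# script can reach them.
Get-SPDeletedSite | ForEach-Object { Restore-SPDeletedSite -Identity $_ }
```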

Let me know if you try this approach and whether it works, and share this post if you found it helpful.

Link to the script uploaded in TechNet.

Happy :-)