Sue Hernandez's Blog

October 3, 2011

SPListItem.DoesUserHavePermissions doesn’t work…or does it?

Filed under: Best Practices, Custom Web Parts, MOSS, Visual Studio 2010 — Susan Hernandez @ 8:48 pm

Don’t you hate when you make dumb – and upon later inspection, rather obvious – mistakes?

I was struggling (I’m too embarrassed to tell you how many hours) with the fact that even though I had broken permission inheritance, my code was always saying that the Current User had access to the list item – and full access at that (that should have been my first clue).

bool hasPermissions = false;
try
{
    // http://blogs.msdn.com/b/ryanrogers/archive/2004/07/15/184594.aspx
    // Turn off SharePoint's access-denied redirect so a failed check
    // surfaces here as an exception instead of bouncing to the login page.
    site.CatchAccessDeniedException = false;

    hasPermissions = item.DoesUserHavePermissions(SPBasePermissions.ViewListItems);
}
catch
{
    hasPermissions = false;
}
finally
{
    site.CatchAccessDeniedException = true;
}

if (!hasPermissions)
{
       ShowError("User does not have permissions to list item");
       return;
}

So I thought the above code would work.  From everything I could see on the web, it SHOULD have worked.  So why didn’t it?

Turns out that I was running in Elevated Privileges mode, so the “Current User” was the system account.  Now mind you, the code above was NOT inside the Elevated Privileges wrapper – that’s what threw me, I think.  But what I did do (and I’m now questioning WHY I did it this way) was set an SPWeb object variable from inside the RunWithElevatedPrivileges delegate.

SPList list = null;
SPWeb web = null;
SPSite site = null;
try
{
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite secureSite = new SPSite(webURL))
        {
            // BAD: the reference escapes both the elevated delegate and
            // the using block, so "site" carries the elevated context out
            // (and points at a disposed object by the time it's used).
            site = secureSite;
            using (SPWeb secureWeb = secureSite.OpenWeb())
            {
                // Same problem: "web" stays elevated.
                web = secureWeb;
            }
        }
    });
}
catch
{
    ShowError("Could not retrieve web at '" + webURL + "'");
    return;
}

try
{
    list = web.Lists[listID];
}
catch
{
    ShowError("List ID '" + listID + "' not found in web");
    return;
}

try
{
    SPListItem item = list.GetItemById(itemID);
    // ... the DoesUserHavePermissions check from the top of the post ran here ...
}
catch
{
    ShowError("Item not found in list");
    return;
}

So apparently the web object stayed elevated, thus the list was elevated, thus the List Item was elevated.  All I had to do was use the overload of DoesUserHavePermissions that takes an SPUser.  But that being said, I’m rethinking this whole “pull it out of the elevated wrapper” thing I have going on here.  I’m marking this post as “Best Practices” as in what NOT to do :-).

hasPermissions = item.DoesUserHavePermissions(SPContext.Current.Web.CurrentUser, SPBasePermissions.ViewListItems);
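
For what it’s worth, here’s the shape I’d aim for next time – a sketch only, not my production code: keep the elevated objects inside the delegate, dispose them there, and pass the real user to the permission check explicitly.

SPUser realUser = SPContext.Current.Web.CurrentUser;
bool hasPermissions = false;

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite elevatedSite = new SPSite(webURL))
    using (SPWeb elevatedWeb = elevatedSite.OpenWeb())
    {
        SPListItem elevatedItem = elevatedWeb.Lists[listID].GetItemById(itemID);

        // The SPUser overload checks the user you pass in,
        // not the (elevated) current user.
        hasPermissions = elevatedItem.DoesUserHavePermissions(
            realUser, SPBasePermissions.ViewListItems);
    }
});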

January 12, 2011

SharePoint Google Geospatial Dashboard Solution – Lessons Learned

Filed under: AJAX, Custom Web Parts, Google Maps API V3, JQuery, MOSS, Web Part Pages — Susan Hernandez @ 11:47 pm
Geospatial Dashboard Solution (screenshot)

So I have spent some time on my SharePoint Geospatial Dashboard Solution.  It consists of 3 web parts:

  • Main Map web part with optional Related Information section at bottom
  • Cluster Information and Filter web part
  • Details web part for individual Location metadata and KPI information.

As this was created for my client, I can’t post the source code here.  However, I can discuss some things I ran across while creating it.  But first, a word on what it is and how it works.  As an aside, configuration of this solution is housed in SharePoint Lists and Web Part Properties.  Data feeds are served up by the CorasWorks Data Integration Toolset (only as a means of normalizing data input from multiple types of sources).

Overall Concept
The main concept is that you have a data set of entities – Airports, Facilities, Buildings, Servers, etc. – that contain Latitude and Longitude coordinates so that they can be plotted.  Now for each one of these “Locations,” as we’ll call them, you have data surrounding that Location that represents some kind of “Status” or “Health” of that Location.  For Airports, you might want to track the percentage of Late Flights at that Location; for Buildings, you might want to track unresolved Help Desk Tickets entered from that Location; for Servers, you might want to track Service Contracts that are expiring in a certain amount of days, or Percentage of Uptime for the applications on that server.

In any case, anything you want to track that helps determine the overall Health of that site can be used as a KPI, as long as you have an available data source that relates the Location’s Primary Key – or “Location Key,” as we’ll call it – to the actual numeric value for that KPI at that given time.

Now this solution was built to surface between 30,000 and 100,000 individual data points.  Showing that many Locations on a map is quite ridiculous, and results in information overload, so we have added some functionality to help alleviate that.

Point Clustering
The first of those concepts is the idea of “Clustering” individual points within a certain proximity of one another into one bigger point, so that there are only between 15 and 30 Clusters on the map at any given time.  We added the ability to click on the cluster and get a typical Google Info Window with a link in it – that link brings up the “Cluster Information Panel” with some details and a list of all of the individual locations contained within that cluster.
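
I can’t share the production clustering code, but the core idea is easy to sketch: snap each point to a grid cell whose size depends on the zoom level, and merge everything that lands in the same cell.  A minimal version (the Location type and the zoom-to-cell-size formula here are illustrative, not what shipped):

// Minimal grid-based clustering sketch (System, System.Collections.Generic).
// Cells shrink as the zoom level grows, so clusters naturally split
// apart as the user zooms in.
Dictionary<string, List<Location>> BuildClusters(IEnumerable<Location> locations, int zoomLevel)
{
    double cellSize = 180.0 / Math.Pow(2, zoomLevel);
    var clusters = new Dictionary<string, List<Location>>();

    foreach (Location loc in locations)
    {
        string cellKey = Math.Floor(loc.Latitude / cellSize) + ":" +
                         Math.Floor(loc.Longitude / cellSize);

        List<Location> cluster;
        if (!clusters.TryGetValue(cellKey, out cluster))
        {
            cluster = new List<Location>();
            clusters[cellKey] = cluster;
        }
        cluster.Add(loc);
    }
    return clusters;
}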

Configurable Metadata (Details web part)
When you bring up the Cluster Information panel, one of the icons next to each Location entry brings up a Details Panel.  The Details Panel shows you the Location Key and Title of the Location, the current KPI Values for each KPI defined, Metadata from the master Geo DataSet (like Address, City, St, Zip), and Metadata from each of the KPI data sets.  We made it so that what is served up for the Details panel is actually just XML, so the user of the site (the administrator configuring the site, actually) defines an XSLT file that determines the display of the information in the Details Panel.
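
That last step is just the standard .NET XSLT plumbing.  A sketch, assuming the Details XML is already in a string and the admin-supplied stylesheet lives at some URL (both names below are placeholders):

// Transform the Details XML with the administrator's XSLT
// (System.IO, System.Xml, System.Xml.Xsl).
XslCompiledTransform transform = new XslCompiledTransform();
transform.Load(xsltUrl);

using (XmlReader detailsXml = XmlReader.Create(new StringReader(detailsXmlString)))
using (StringWriter html = new StringWriter())
{
    transform.Transform(detailsXml, null, html);
    detailsLiteral.Text = html.ToString();   // render into the panel
}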

Configurable Filtering and Searching
One of the ways that the user is able to “find the needle in the haystack” is to use both Filtering and Searching.  The administrator configuring the site determines which columns from the data sources should be used to Filter and to Search.  When you bring up the Filter tab, it brings up a cached list of every possible value from every column that was configured as a filter column.  You check the boxes next to the values you want to include – for instance, you can filter on ((Region = South OR West) AND (Facility Type = Building OR Vehicle)).  In addition to this filtering, you can perform a wildcard search on one of the configured columns, such as (Title = Water*).
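
In case the semantics aren’t obvious: values checked within one column are ORed, and the columns themselves are ANDed.  A sketch of that logic (Location and GetColumnValue are illustrative stand-ins for the real data layer):

// OR within a column, AND across columns (System.Linq).
var selectedFilters = new Dictionary<string, HashSet<string>>
{
    { "Region", new HashSet<string> { "South", "West" } },
    { "Facility Type", new HashSet<string> { "Building", "Vehicle" } }
};

IEnumerable<Location> filtered = locations.Where(loc =>
    selectedFilters.All(filter =>
        filter.Value.Contains(loc.GetColumnValue(filter.Key))));

// Wildcard search on a configured column, e.g. (Title = Water*):
filtered = filtered.Where(loc => loc.Title.StartsWith("Water"));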

Background Refreshing
As part of the configuration, the administrator can set up intervals of when the cached data should be refreshed from its source.  The data refreshes in the background, and only refreshes the map when it’s done re-loading the data and the map points.

Related Information
Along with metadata about a location or about a KPI’s value, you might have documents, drawings, schematics, etc. that belong to that Location.  A set of Related Information tabs displays at the bottom of the map, and when you click on the Details of an individual location, the tabs below filter the documents down to just the one selected.

Other Stuff
Just for fun, I also added the ability to add a Skin around the outside of the map.


THE LESSONS LEARNED

OK so enough of the sales pitch on what it does :-).  I learned a lot while creating this solution, mostly about AJAX, because I had never actually implemented Update Panels before. 



Working with the Client-Side from the Server / RegisterClientScriptBlock
As you know, working with Google Maps API v3 is all client-side.  So 75% of what my web parts do is just to emit the right Javascript at the right time.  I used to be in the habit of putting the Javascript just in a LiteralControl and adding that to the page.  This time, however, I started using ScriptManager.RegisterClientScriptBlock() like I’m supposed to. 

There’s nothing too special about this, especially for those of you who are familiar with it already.  However, I just wanted to point out that I was also able to add Javascript to the page on the fly after an AJAX update, simply by making sure that the Control I used as the first argument to the function was the UpdatePanel, and the second argument was the GetType() of the UpdatePanel.  Every time the page did an asynchronous postback, the script that had been added dynamically on the previous async postback disappeared, like I wanted it to, so it didn’t run again.
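
In code, the pattern I’m describing looks roughly like this (the script content and key are placeholders; updatePanel is whatever panel owns the content):

// Register the script against the UpdatePanel itself so it is emitted
// as part of the async update – and dropped again on the next one.
string script = "initMap();";   // whatever needs to run after this update
ScriptManager.RegisterClientScriptBlock(
    updatePanel,            // first argument: the UpdatePanel control
    updatePanel.GetType(),  // second argument: its GetType()
    "mapInitScript",        // unique key for this block
    script,
    true);                  // wrap in <script> tags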

The STUPID little Progress Indicator
This was the BIGGEST pain in the you-know-where:  adding a stupid little Progress Indicator for the ASYNCHRONOUS postbacks.  Since my async postbacks took up to 30 seconds (hey, what do you expect with 100,000 data points?), I wanted to provide a way for the user to know how far we were in the process.

Well, at first all my code was in a common Class file shared between my 3 web parts.  The progress indicator, however, had to be client-side code running every second, checking the status on the server for how far it was in the process.  My first hurdle was that I couldn’t call the code in the Class file from the client unless I turned it into a full-blown SharePoint Custom Web Service.  There was something I can’t quite remember about PageMethods and EnablePageMethods=”true”, but something about the Master Pages in SharePoint made it not work.

So what I did was move all my crunching code into a Web Service file – BUT – I only decorated the one method that gets the status with the [WebMethod] attribute.  I figured, hey, I could start the crunching process, and inside the crunching process I could increment a class instance variable that holds the progress information.

Well, I had 2 problems with that.  The first problem I’ll discuss in the next segment below, but after I figured that out, I still had to deal with the second problem.  Once I got the right class instance and asked for the percentage complete, I was only getting either 0 or 100.  It wouldn’t update the variable in the middle of the crunching.  So I had to actually raise a custom event called ProgressHasChanged and update the variable in the event handler for that, so that the crunching wouldn’t get in the way of the variable updating.
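
Reconstructing the shape of that (names and plumbing are mine, not the client’s code – and GetInstance is the lookup described in the next section):

// The cruncher raises ProgressHasChanged as it works; a handler
// copies the value into a field the [WebMethod] can read.
public class ProgressEventArgs : EventArgs
{
    public int Percent { get; private set; }
    public ProgressEventArgs(int percent) { Percent = percent; }
}

public partial class Cruncher
{
    public event EventHandler<ProgressEventArgs> ProgressHasChanged;
    private int percentComplete;

    public Cruncher()
    {
        ProgressHasChanged += (s, e) => percentComplete = e.Percent;
    }

    public int PercentComplete { get { return percentComplete; } }

    protected void OnProgressHasChanged(int percent)
    {
        var handler = ProgressHasChanged;
        if (handler != null) handler(this, new ProgressEventArgs(percent));
    }
}

// The one client-callable method on the web service:
[WebMethod]
public int GetProgress(string instanceID)
{
    return Cruncher.GetInstance(instanceID).PercentComplete;
}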

AJAX and the Page Lifecycle and a “Class Factory?”
The first problem I had, though, was I kept losing my instance of the class.  So I found out here (http://encosia.com/2007/10/24/are-you-making-these-3-common-aspnet-ajax-mistakes/) that I was making the common newbie mistake – I didn’t know the async postback was making my whole web part’s event lifecycle fire all over again.

So what I decided to do was the Sue version of a Class Factory – I call it the Sue version because I still don’t really have a grasp on what a Class Factory is supposed to be and how it’s supposed to be implemented.  So here’s what I did: when my web part first loads, it asks the class to give me an existing instance of the class if it has one, or a new one if it doesn’t.  Inside the class there’s a static Dictionary holding instances of the class by InstanceID – some unique identifier I came up with, which was a combo of the base 64 of the page URL along with the guid for the web part.  The method creates a new instance of the class and puts it in the dictionary.  On subsequent async postbacks, it gets the same instance back again and uses that.
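
A sketch of that lookup (again, my reconstruction – the lock is something I’d add today, since a static dictionary is shared across all requests):

public partial class Cruncher
{
    // One instance per page/web-part combination, shared across
    // the async postbacks of that page.
    private static readonly Dictionary<string, Cruncher> instances =
        new Dictionary<string, Cruncher>();
    private static readonly object instancesLock = new object();

    public static Cruncher GetInstance(string instanceID)
    {
        lock (instancesLock)
        {
            Cruncher instance;
            if (!instances.TryGetValue(instanceID, out instance))
            {
                instance = new Cruncher();
                instances[instanceID] = instance;
            }
            return instance;
        }
    }

    public static void RemoveInstance(string instanceID)
    {
        lock (instancesLock) { instances.Remove(instanceID); }
    }
}

// Building the InstanceID in the web part, per the description above:
string instanceID =
    Convert.ToBase64String(Encoding.UTF8.GetBytes(Page.Request.Url.AbsolutePath))
    + "_" + this.ID;   // this.ID standing in for the web part's guid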

The big challenge was how to get rid of the instance when the page was done.  So, after some trial and error, I used the client-side window.unload event to launch a jQuery Ajax call back to the server to remove the instance with that ID.
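
Something along these lines, emitted from the web part (the service URL and method name are placeholders for whatever your cleanup endpoint is):

// Emit an unload hook that tells the server to drop this page's instance.
string cleanupScript =
    "$(window).unload(function () {" +
    "  $.ajax({" +
    "    type: 'POST'," +
    "    url: '/_vti_bin/GeoDashboard.asmx/RemoveInstance'," +   // placeholder
    "    data: { instanceID: '" + instanceID + "' }," +
    "    async: false" +   // give the request a chance to complete
    "  });" +
    "});";

ScriptManager.RegisterClientScriptBlock(
    this, this.GetType(), "unloadCleanup", cleanupScript, true);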

Using Javascript to call an AJAX, Server-Side Update
So when the Map’s zoom level changed, or when you panned far enough, I had to call back to the server to run the async function to reload the points given the new zoom level and map bounds.  So I needed a way for Javascript to call a server-side function.

That turned out to be relatively straight-forward.  What I did was, for each Update Panel (I had several), I added a hidden HTML div.  Inside that DIV were 2 things – a set of text boxes that held Argument values to be used in the server-side function, and a button that would be used to actually trigger the async postback.

So one of the things the web part did was to emit the Javascript function that would be called to do this.  That Javascript function set all the arguments to the right values – I just used the ClientID of the textbox that I hid in the Div to get the object, and then set the value.  After I was done setting all the argument variables, and any other peripheral client-side stuff I had to do, I manually called __doPostBack on the UniqueID of the button that I hid in the Div, inside that UpdatePanel.
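
Put together, the emitted function looks something like this (control names are illustrative):

// Emit a function the map's client-side code can call to trigger an
// async postback with arguments.
string script =
    "function reloadPoints(zoom, bounds) {" +
    "  document.getElementById('" + argZoomBox.ClientID + "').value = zoom;" +
    "  document.getElementById('" + argBoundsBox.ClientID + "').value = bounds;" +
    "  __doPostBack('" + btnReload.UniqueID + "', '');" +
    "}";

ScriptManager.RegisterClientScriptBlock(
    updatePanel, updatePanel.GetType(), "reloadPoints", script, true);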

RegisterAsyncPostBackControl and UpdatePanel.Update()
One of the weird things I encountered, however, was that setting ChildrenAsTriggers = true for the UpdatePanel didn’t work.  I had to manually register each Button control that I wanted to post back with, using scriptManager.RegisterAsyncPostBackControl.

The other weird thing was that in the event handler that ran when that button was triggered, the content inside the UpdatePanel didn’t actually refresh on the client unless I put updatePanel.Update() immediately at the end of the function.
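
Both fixes in one place (a sketch – btnReload and ReloadMapPoints are the illustrative names from above):

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    // ChildrenAsTriggers didn't cut it; register the button explicitly.
    ScriptManager.GetCurrent(this.Page).RegisterAsyncPostBackControl(btnReload);
    btnReload.Click += BtnReload_Click;
}

private void BtnReload_Click(object sender, EventArgs e)
{
    ReloadMapPoints();      // the actual server-side work
    updatePanel.Update();   // without this, the panel never refreshed
}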

Loading XML Files from secure SharePoint using XmlTextReader and XmlUrlResolver
Switching topics: back in the class that was doing all the crunching, I had to load in the data sources when I went to refresh the Cache.  The way we set it up, we used the CorasWorks Data Integration Toolset to sort of “normalize” how the data was coming into the program, whether it came from SharePoint, SQL Server, Web Services, or a flat file.  The way the DIT worked was that you put a certain type of web part onto a Web Part Page; it retrieved its data on the back end and then replaced the output of the Web Part Page with just an XML stream, as if you were just reading an XML file.

So I had to figure out the best and quickest way to read an XML file from the web and stick it into a DataSet object.

I checked around, and what seemed to work best in my scenario was the XmlTextReader, which takes a URL to the file as an argument.  However, because we were over HTTPS, I had to make sure that I created an XmlUrlResolver, passed it the NetworkCredentials, and set the resolver into the XmlTextReader.  After that was set, I just called DataSet.ReadXml(xmlTextReader, XmlReadMode.ReadSchema).
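
The whole thing is only a few lines (credentials and URL are placeholders; namespaces are System.Xml, System.Net, and System.Data):

// Read the DIT page's XML output over HTTPS into a DataSet.
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = new NetworkCredential(userName, password, domain);

XmlTextReader xmlTextReader = new XmlTextReader(dataFeedUrl);
xmlTextReader.XmlResolver = resolver;

DataSet dataSet = new DataSet();
dataSet.ReadXml(xmlTextReader, XmlReadMode.ReadSchema);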

SQLBulkCopy
Last but not least, I had large data sets with updates, because I’d have to refresh the data cache from its source files.  I didn’t want to iterate through each row and update SQL that way; I figured there had to be a better way, and there was.

The SqlBulkCopy object takes in a Data Table and does just what it says – it bulk-writes all the changes in the data table into SQL Server.  The only thing I had to worry about was adding bulkCopy.ColumnMappings for each column I wanted to include, as my data table’s structure was slightly different from my Cache table’s structure.
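
A sketch of that (connection string, table, and column names are placeholders):

// Bulk-write the refreshed rows into the cache table (System.Data.SqlClient).
using (SqlConnection connection = new SqlConnection(cacheConnectionString))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "LocationKpiCache";

        // Map only the columns that line up between the two structures.
        bulkCopy.ColumnMappings.Add("LocationKey", "LocationKey");
        bulkCopy.ColumnMappings.Add("KpiValue", "CachedKpiValue");

        bulkCopy.WriteToServer(dataTable);
    }
}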

That’s it.  I hope some of you can benefit by the tips here, and if any of you ran into better ways of doing things, drop me a comment.

Thanks
