Understanding the Purchase Request fields and how they are calculated

Before using the Purchase Request module, it is important to understand some of the more obscure fields and how the totals are calculated. The purpose of this post is to do just that.

Sub Total

The Sub Total is, surprisingly enough, quite straightforward: all that is happening is that we sum the prices of all the active Purchasing Items for this Purchase Request.


The simplicity of the Sub Total field is not, however, mirrored in the Total field. So what is going on here? Well, the key lies in the CI Type module (formerly known as Inventory Catalog). Each Purchasing Item (that we are summing) is of a certain CI Type, and each CI Type has a flag that determines whether that CI Type is Taxable or not. Whether a Purchasing Item's CI Type is taxable determines whether the Tax field (in the Purchase Request module) gets applied to that line. So let's say you have a Purchase Request with two line items, each costing $100. One of the line items is taxable (as determined by the CI Type) and the other is not, and you have set a tax value of 9%. The Total would be:

(100 * 1.09) + (100) = $209 + Shipping (Not Taxable)
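To make the rule concrete, here is a small sketch of the calculation in Python. The function and field names are illustrative only, not SDE's actual code:

```python
# Sketch of the Total calculation: tax is applied per line item,
# depending on the Taxable flag of that line item's CI Type.
# Names here are illustrative, not SDE's real column names.

def purchase_request_total(line_items, tax_rate, shipping=0.0):
    """line_items: list of (price, taxable) tuples; tax_rate e.g. 0.09."""
    total = 0.0
    for price, taxable in line_items:
        # Tax is only added to lines whose CI Type is flagged Taxable.
        total += price * (1 + tax_rate) if taxable else price
    # Shipping is added on afterwards and is never taxed.
    return total + shipping

# Two $100 line items, one taxable, 9% tax, no shipping:
print(round(purchase_request_total([(100, True), (100, False)], 0.09), 2))  # 209.0
```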

This is rather clever, but if you don't know about it you can spend a lot of time scratching your head trying to work out why your Totals don't match what you think they should be.

There is a significant gotcha with this system: the Taxable field is nullable, which means that if you happen to remove the field from the CI Type form, then whenever you save a CI Type that field is left as Null. When the Purchase Request Total calculation encounters a Null Taxable value, the entire Total becomes 0 – so whoever made that decision should be shot!


With an understanding of how these fields are calculated, and the little gotchas that go along the way, you should hopefully be in a better position when you are implementing/understanding Purchasing in Service Desk Express. I have included some links to other relevant articles on Purchasing below. As always, hope this has been helpful. Any comments, positive or negative, are incredibly welcome.

Related Posts:

Forward Schedule of Change V2.0

Version 2.0 of the Forward Schedule of Change application is nearing completion and should be available before the end of the month. Some of the changes that have been incorporated include:

  • Full screen change calendar with change details on separate page.
  • Change details now include Change History, Change Assessments and Change Approvals (visibility of which can be toggled on and off by the application administrator).
  • Links to update changes (visibility of which can be toggled on and off by the application administrator).
  • New search functionality allowing the searching of selected fields.

The new layout, removing the change details panel on the right hand side of the application, will not be to everyone’s liking and as such V1.0 will still be available if organisations prefer the old style.

As always, feedback (positive or negative) is always welcome.

HTML Emails in SDE

A lot of people ask about sending email notifications from SDE in HTML format.  This is a great way to make your notifications consistent and easy to read for your users.  It also works very well for including links in the email for your users to click.  There aren’t too many email clients that won’t display HTML emails these days, although I have in the past had slight problems with Lotus Notes where tables don’t display exactly how you expect.  It is not a show-stopper and usually can be remedied with some minor tweaking specific to the way Notes handles HTML.  Also, certain versions of BlackBerry devices do not display HTML at all, or not very well – especially HTML links.

Business Rules Templates used in conjunction with some simple HTML and Cascading Style Sheets will make the notifications look very professional and consistent.   At the end of this post I will provide some downloadable Business Rules Templates that you can use in your own rules and tweak as needed.  Let’s dive right in and get started.

The basic requirement for having emails sent by SDE display in HTML format is to include the <HTML> and </html> tags around the body of the notification.  HTML formatting in email subjects is not supported by email clients, however.  Here is an example of a simple Incident notification in HTML format:

Incident #{TR,Incident #} has been created for you.<BR>
<B>Open Date & Time:</b> {TR,Open Date & Time}<BR>
<B>Category:</b> {TR,Subject Description}<BR>
<B>Urgency:</b> {TR,Urgency ID:}

Remember to always use the HTML line break tag <BR> rather than just pressing enter in the notification.
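Under the hood, a notification body is just a template whose {TR,&lt;field&gt;} tokens are substituted with values from the triggering record. A minimal sketch of that idea in Python (render_template is a hypothetical stand-in for illustration; it is not how SDE actually implements substitution):

```python
import re

# Illustration only: conceptually, SDE's business-rule engine replaces
# each {TR,<field>} token with the matching value from the record that
# triggered the rule. Unknown fields are rendered as empty strings here.
def render_template(template, record):
    return re.sub(r"\{TR,([^}]+)\}",
                  lambda m: str(record.get(m.group(1), "")),
                  template)

body = ("<HTML>Incident #{TR,Incident #} has been created for you.<BR>"
        "<B>Urgency:</b> {TR,Urgency ID:}</html>")
record = {"Incident #": "INC0001", "Urgency ID:": "High"}
print(render_template(body, record))
```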

The above is a very basic example of how to use HTML in notifications.  However, what if you have multiple Incident notifications and want to use the same format in all of them?  You could put the HTML into each notification body, but then if you ever want to make changes you are stuck modifying multiple rules.   This is where Business Rules Templates come into play.  Here is the template I use at the beginning of all of my notifications – I name it _HTML_HEAD:

<STYLE type="text/css">
tbody td{
border:1px solid #000;
padding:.2em .5em;
}
tbody th{
border:1px solid #000;
padding:.2em .5em;
}
</STYLE>

This provides very nice formatting for presenting information in a table. I create an _HTML_FOOT template also:

<BR><CENTER><HR width="80%" size="1" noshade>
<FONT color="silver" size="1">{EVENTMODULE} {EVENTMETHOD} / {RULENAME}</font>

Putting this at the end of every notification gives you troubleshooting information about the event and the Business Rule that generated the notification.  Note that the module you choose for the HTML templates does not matter.  Since no actual fields from any module are used in them, you can use them for notifications based on Incidents, Work Orders, Change Requests, and any other module.

Here is the standard Incident template I use for notifications, which I name _INCIDENT:

<TABLE width="80%">
<TR><TH width="150">Incident #</th>
<TD>{TR,Incident #}</td></tr>
<TR><TH>Open Date & Time</th>
<TD>{TR,Open Date & Time}</td></tr>
<TR><TH>Client</th>
<TD>{TR,Client ID} – {TR,First Name} {TR,Last Name}</td></tr>
<TR><TH>Phone #</th>
<TD>{TR,Cl.Phone #}</td></tr>
<TR><TH>Email</th>
<TD>{EXTRACT,{},{TR,Client Email}}</td></tr>
<TR><TH>Company</th>
<TD>{TR,Company Name}</td></tr>
<TR><TH>Department</th>
<TD>{TR,Department Name}</td></tr>
<TR><TH>Category</th>
<TD>{TR,Subject Description}</td></tr>
<TR><TH>Urgency</th>
<TD>{TR,Urgency ID:}</td></tr>
<TR><TH>Assigned Staff</th>
<TD>{TR,Login ID Assigned To} – {TR,First Name Assigned To} {TR,Last Name Assigned To}</td></tr>
<TR><TH>Assigned Group</th>
<TD>{TR,Group Name}</td></tr>
<TR><TH colspan="2">Description</th></tr>
<TR><TD colspan="2"><PRE>{TR,Incident Description}</pre></td></tr>
</table>

Notice I use the <TH> and <TD> table tags in conjunction with the style sheet to control the formatting.  Putting these templates together into a notification is very simple.  I usually include descriptive text unique to each rule so that the person receiving the email will understand why it was sent to them:

Incident Notification with HTML and templates

Here is how the notification looks to the person receiving it:

Incident Notification Result

With SDE’s ability to generate a link that takes a Staff member to the application and opens a particular record, it is common to include the links in email notifications.  However, you don’t want the Clients seeing these links because they won’t work.  Because of this, I create separate templates for the links.  Here is the template I use for Incident links, appropriately named _INCIDENT_LINKS:

<A HREF="{LINK,Incident,{TR,Incident #},}">Click here</a> to view this Incident in SDE.

This is so simple it seems like using a template for this is overkill.  Again, remember that you may include this template in many rules and if you ever want to make a change to it you’ll only have to make that change to the template.  My template _WORKORDER_LINKS also includes a link to the parent Incident (if one exists).  This is an example of the fact that MATH statements, and of course DB Lookups, can be used in templates as well.

<A HREF="{LINK,Work Orders,{TR,Work Order #},}">Click here</a> to view the Work Order in SDE.<BR>
{MATH,(CASE WHEN '{TR,Seq.HD #}' <> '' THEN '<A HREF="{LINK,Incident,{TR,Seq.HD #},}">Click here</a> to view the parent Incident in SDE.' ELSE '' END)}

The above template will always show a link to the Work Order, but the MATH statement will only display the link to the parent Incident if there is one.
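Since the MATH statement is just a SQL CASE expression, you can sanity-check its two branches against any SQL engine. A quick check in Python with SQLite (parent_link is an illustrative helper, not part of SDE):

```python
import sqlite3

# The MATH statement above boils down to:
#   CASE WHEN '<Seq.HD #>' <> '' THEN '<link markup>' ELSE '' END
# Trying both branches in SQLite (any SQL engine behaves the same way):
conn = sqlite3.connect(":memory:")

def parent_link(seq_hd):
    sql = "SELECT CASE WHEN ? <> '' THEN 'link to Incident ' || ? ELSE '' END"
    return conn.execute(sql, (seq_hd, seq_hd)).fetchone()[0]

print(parent_link("12345"))   # link to Incident 12345
print(repr(parent_link("")))  # '' - no parent, so no link markup at all
```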

The last aspect of templates I want to discuss is the context of the Business Rule and template.  This is very important.  As an example, suppose you want to trigger a Business Rule from the creation of an Incident Detail with a particular Action ID.  If you want to include the Incident template in a notification from this rule the problem is that you can’t just use {TR} fields from the Incident – the context of the rule is Incident Details, not Incidents.  For this reason, I create a template named _INCIDENT_LOOKUPS:

<TABLE width="80%">
<TR><TH width="150">Incident #</th>
<TD>{TR,Incident #}</td></tr>
<TR><TH>Open Date & Time</th>
<TD>{DB,Incident,Open Date & Time,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Client</th>
<TD>{DB,Incident,Client ID,"Incident #" = {TR,Incident #}} – {DB,Incident,First Name,"Incident #" = {TR,Incident #}} {DB,Incident,Last Name,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Phone #</th>
<TD>{DB,Incident,Cl.Phone #,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Email</th>
<TD>{EXTRACT,{},{DB,Incident,Client Email,"Incident #" = {TR,Incident #}}}</td></tr>
<TR><TH>Company</th>
<TD>{DB,Incident,Company Name,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Department</th>
<TD>{DB,Incident,Department Name,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Category</th>
<TD>{DB,Incident,Subject Description,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Urgency</th>
<TD>{DB,Incident,Urgency ID:,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Assigned Staff</th>
<TD>{DB,Incident,Login ID Assigned To,"Incident #" = {TR,Incident #}} – {DB,Incident,First Name Assigned To,"Incident #" = {TR,Incident #}} {DB,Incident,Last Name Assigned To,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH>Assigned Group</th>
<TD>{DB,Incident,Group Name,"Incident #" = {TR,Incident #}}</td></tr>
<TR><TH colspan="2">Description</th></tr>
<TR><TD colspan="2"><PRE>{DB,Incident,Incident Description,"Incident #" = {TR,Incident #}}</pre></td></tr>
</table>

As you can see, this template uses the Incident # to look up all the Incident fields.  It is simply a coincidence that the foreign key in the Incident Details module is called Incident #.  It is, however, a useful coincidence.  If you are doing time-based rules where the notification is delayed, you always want to use DB Lookups to get the most current data at the moment the notification is sent.  Since {TR,Incident #} is a valid field in the Incident module, you can use this lookup template in those notifications too.

Finally, here is a link to a set of text files that contain the basic templates I like to use.   You can modify them to suit your needs, of course.  Keep in mind that in the Change Approval template I have used some fields that aren’t available in the module by default.  You’ll either have to remove them from the template, change them to DB Lookups, or add them as virtual fields to the Change Approval module (which is what I do).

Downloadable Business Rules Templates

CI Assemblies/Configurations – How To Use Them – Part 2

In part 1 of this series (http://www.joatit.com/wordpress/?p=539) I described the relationships and use of the modules associated with the CI Assemblies/Configurations module. I also described how I would implement a real-world example using these modules and alluded to the need to automate some functionality through business rules and to be aware of certain issues when it comes to reporting.

In this post, I want to cover off those points and hopefully provide some additional resources to help you get the most out of the product.

Business Process Automation

In the first post in this series I suggested it would be a sensible idea to have a CI Assembly/Configuration for each client to manage their mobile assets. Fine, but what a pain in the proverbial if, for every client you create, you have to manually create a CI Assembly/Configuration and then link it to that client! At this point I would normally suggest a business rule, but unfortunately we have a problem – namely, Service Desk Express does not allow you to create business rules based on the Assembly Clients module, which means we could create the CI Assembly/Configuration but couldn't link it to the client. So we need an alternative – enter the Integration Console:

CC – Create and Link CI Assemblies to Clients (click here to download).

What we need to do to achieve this is create a package that contains a step and a substep:



We will set the Initiator to be a schedule running every 15 minutes (as shown below) such that within 15 minutes of creating a new account your CI Assembly/Configuration will be created and linked.



The key to this package is the source for the Create CI Assemblies Step. We need to run a SQL query to only return the clients that don’t have a matching CI Assembly/Configuration so we need to use the ODBC adapter type as shown below:


using the SQL query:
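(The query itself was shown as a screenshot. The idea is an anti-join: return only the clients with no matching CI Assembly/Configuration. A toy sketch of the same shape in Python with SQLite, using illustrative table and column names rather than the real SDE schema:)

```python
import sqlite3

# Toy anti-join: which clients have no matching CI Assembly yet?
# Table and column names are illustrative stand-ins for the SDE views.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Clients (ClientID TEXT);
CREATE TABLE Configurations (ConfigurationID TEXT);
INSERT INTO Clients VALUES ('ALICE'), ('BOB');
INSERT INTO Configurations VALUES ('ALICE');  -- BOB has no assembly yet
""")
missing = conn.execute("""
    SELECT C.ClientID FROM Clients C
    WHERE NOT EXISTS (SELECT 1 FROM Configurations CFG
                      WHERE CFG.ConfigurationID = C.ClientID)
""").fetchall()
print(missing)  # [('BOB',)]
```

This mirrors the convention from part 1 of matching the Client ID to the CI Assembly ID, so each scheduled run only creates the assemblies that are genuinely missing.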



We then map the target to the CI Assembly view using the SDE adapter type as shown below:


So that creates the CI Assemblies/Configurations for the Clients – now we need the sub step to link them to each other.


As can be seen below, we only want the sub step to run if the parent succeeds:



None/Not used.


Finally we map the target to the Assembly Clients view using the SDE adapter type as shown below:


This will create and link a CI Assembly for every client that does not already have one based on matching the Client ID with CI Assembly ID  as mentioned in my previous post.

Orphaned Configurations and CSBRs

I wrote a post a while back on how to warn users if the CI Assembly/Configuration they are saving is NOT linked to a client. I would also suggest that you implement this business rule as well.


Now, if you have implemented something similar to what I suggested in the previous post, there are a couple of issues relating to reporting that you need to be aware of. Specifically, if management ask for a list of all assets and their users, you need to be very careful how you handle the many-to-many relationship between Clients and CI Assemblies, otherwise you could end up with duplicate assets in the cases where an asset is used by multiple clients. So what are your options?

Exclude duplicates

Your first option is to exclude these circumstances altogether. Let me explain:

SELECT II.[Sequence], II.[Asset/Tag #], II.[Serial #], II.[Category ID], II.[Part Description], CC1.[Client ID], CC1.[Configuration ID]
FROM [_SMDBA_].[Inventory Items] II LEFT JOIN [_SMDBA_].[Configurations] C ON C.[Sequence] = II.[Seq.Configuration]
LEFT JOIN [_SMDBA_].[Configurations Clients] CC1 ON CC1.[Seq.Configuration] = C.[Sequence]
WHERE II.[InActive:] = 0 AND (SELECT COUNT(*) FROM  [_SMDBA_].[Configurations Clients] CC2 WHERE CC2.[Seq.Configuration] = CC1.[Seq.Configuration]) < 2

The advantage of this is that your report is accurate in terms of assets as long as you appreciate that your assets do not include shared assets. The disadvantages are pretty obvious in that gaining a holistic view of your assets using this method is almost impossible and as such I would not recommend it.

Show first user only

An alternative solution is to only show the asset once and assume that the first person sharing an asset is the person responsible for it. This is done using an embedded SELECT statement in your report:

SELECT II.[Sequence], II.[Asset/Tag #], II.[Serial #], II.[Category ID], II.[Part Description],
(SELECT TOP 1 CC.[Client ID] FROM [_SMDBA_].[Configurations Clients] CC WHERE II.[Seq.Configuration] = CC.[Seq.Configuration]) As 'Client ID',
(SELECT TOP 1 CC.[Configuration ID] FROM [_SMDBA_].[Configurations Clients] CC WHERE II.[Seq.Configuration] = CC.[Seq.Configuration]) As 'Configuration ID'
FROM [_SMDBA_].[Inventory Items] II
WHERE II.[InActive:] = 0

The big advantages here are that there is no customisation required and, critically, the correct number of assets is shown on the report (including shared assets). The big disadvantage is that it makes a very large and often incorrect assumption – i.e. that the first person sharing an asset is the person responsible for it. This simply isn't true in many circumstances, and if you are using this information for billing then you're potentially opening up a whole can of worms. Again, not my preferred option.
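To see the shape of the "first user only" query in isolation, here is a toy version in Python with SQLite. The tables are simplified stand-ins for the SDE views, and SQLite's LIMIT 1 plays the role of SQL Server's TOP 1:

```python
import sqlite3

# Toy "show first user only": exactly one row per asset, even when the
# asset's configuration is shared by two clients. Simplified schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE InventoryItems (Seq INTEGER, Tag TEXT, SeqConfiguration INTEGER);
CREATE TABLE ConfigurationsClients (SeqConfiguration INTEGER, ClientID TEXT);
INSERT INTO InventoryItems VALUES (1, 'LAPTOP-001', 10);
INSERT INTO ConfigurationsClients VALUES (10, 'ALICE'), (10, 'BOB');  -- shared
""")
rows = conn.execute("""
    SELECT II.Tag,
           (SELECT CC.ClientID FROM ConfigurationsClients CC
            WHERE CC.SeqConfiguration = II.SeqConfiguration LIMIT 1) AS ClientID
    FROM InventoryItems II
""").fetchall()
print(rows)  # one row for the shared asset, attributed to one client
```

Note that with no ORDER BY, "first" is whatever row the engine happens to return, which is exactly the arbitrary-assumption problem described above.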

Add Person Responsible field and CSBR

My preferred option is to add a new field (Seq.Responsible) to the Inventory Item/Configuration Item module – a foreign key to the Client module (obviously with its relevant virtual fields) that is used to store the Person Responsible for the asset as shown below:


Add this field to your Inventory Item/Configuration Item form and you can then create a simple CSBR that, on change of Configuration/CI Assembly, prefills the Person Responsible with the first person linked to that Configuration/CI Assembly. The massive advantage here is that although the person is prefilled, you have the ability to overrule the suggestion. What is more, you can even select a departmental manager who does not use the shared asset in question. Now you have the ability to report directly off the Inventory Item/Configuration Item module, giving you accurate asset information that CAN be used for billing without argument.

CSBR: CI – Prefill Responsible Client On Change Of CI Assembly

When On Data Change occurs for field CI Assembly ID where {TR,CI Assembly ID} NotEqual <Blank>
Populate Current Form - JQS - Configuration Item (the new form with the Seq.Responsible field added) setting Seq.Responsible = {DB,Assembly Clients,Seq.Client,"Seq.CIAssembly" = {TR,Seq.CIAssembly}}
Assigned To JQS - Configuration Item


CI Assemblies provide a highly flexible solution to the management of assets in Service Desk Express, but it is essential that robust business processes are put in place to combat the risk of poor data. As always, comments, positive or negative, are welcome.

Restoring Service Desk Express from one server to another

I can’t imagine there are many organisations that use BMC Service Desk Express but don’t have a development / test environment. Personally, I always prefer to have three environments:

  • Development: Where I am trying out new concepts, customisations, business rules etc.
  • Test/Training: A permanent training environment for new staff and a staging area where new customisations can be tested and approved by the business.
  • Production

Now in an ideal world your Production and Test/Training environment would be either identical or, in the case of just prior to a new release, slightly out of sync. This is normally achieved by restoring a backup of the production database over the test/training database. Whilst this is relatively easy if the Production and Test/Training environments are on the same physical network, this can become slightly more involved if this is not the case. What I want to do with this post is provide a simple backup and restore procedure that works in any SQL Server environment.


This is the easy bit – the only catch is that frequently your Test/Training environment doesn’t have the disk capacity of your production SQL server and, due to the way that Service Desk Express polls the database, the transaction logs of an SDE database can be huge! So assuming that your DB is called SDE and you are using SQL Server 2005 your code would look something like:


where YYYY is the year, MM is the month, DD is the day and x is simply an incrementing number.


Once we get our backup from one server to the other we then need to restore it. Restoring a normal database is easy – restoring a Service Desk Express database is also easy but unfortunately requires a couple of preparation steps:


Service Desk Express uses a number of Windows Services to perform essential background tasks and these services connect to the database to perform their function. You can’t restore a database without first closing the connections to it. For this reason, and only on a Test/Training or Development Service Desk Express application server, I deliberately configure these services slightly differently:

Warning: Do NOT do this on a production system!

From the desktop click Start > Run and type Services.msc and press Return. From the Services window scroll down to the SDE Indexing Service and select it. Select Actions > Properties from the Services window menu and from the Properties window change the Startup Type to Manual and click Stop. Once the service has successfully stopped click OK to return to the Services window. Repeat this process for the following services: SDE Integration Engine; SDE Job Processor – SDEBR; SDE Mail Processor – SDEBR; and SDE Push Service – SDEPPS. Close the Services window.

Now if we click Start > Run and type IISRESET and press Return we can completely restart IIS flushing out all the metadata without the services reconnecting to the database. Now we can restore our database:


If you have issues with this statement the chances are that your data and log files in your production environment are on a different drive and consequently you need to use the WITH MOVE command to move them to the correct place:

RESTORE DATABASE SDE FROM DISK='C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\SDE_YYYY_MM_DD_x' WITH MOVE 'SDE_data' TO 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\SDE_data.mdf', MOVE 'SDE_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\SDE_log.ldf'

Fixing Everything Else

If only it were that simple – then you could just log in and all would be perfect! Unfortunately that is far from the case:

Owners, Logins and Users

Your first problem is that the _SMSYSADMIN_ account needs to be the owner of the SDE database unless it is a SQL Server sysadmin. It’s your choice but the former is more secure than the latter:

-- Option 1 (more secure): make _SMSYSADMIN_ the database owner
EXEC sp_ChangeDBOwner '_SMSYSADMIN_'
-- Option 2: make _SMSYSADMIN_ a SQL Server sysadmin
EXEC sp_AddSrvRoleMember '_SMSYSADMIN_', 'sysadmin'

Your next problem is that when you restore a db from one server to another the roles and users don’t automatically sort themselves out. As such you need to match them up using the following SQL:

EXEC sp_Change_Users_Login 'Auto_Fix', '_SMDBA_'
EXEC sp_Change_Users_Login 'Auto_Fix', 'SMSYSEXEC'
EXEC sp_Change_Users_Login 'Auto_Fix', 'MGUSER20xx'
EXEC sp_Change_Users_Login 'Auto_Fix', '_SMSYSADMIN_'

If you are using Service Desk Express 9.6 or above then you also need to run:

EXEC sp_Change_Users_Login 'Auto_Fix', 'SYSTEM ADMINISTRATION'
EXEC sp_Change_Users_Login 'Auto_Fix', 'ADMINISTRATOR'
EXEC sp_Change_Users_Login 'Auto_Fix', 'SELFSERVICE'
EXEC sp_Change_Users_Login 'Auto_Fix', 'SYSTEMACCOUNT'
EXEC sp_Change_Users_Login 'Auto_Fix', 'HELP DESK'

If you are using Service Desk Express 9.2 or below then you also need to run:

EXEC sp_Change_Users_Login 'Auto_Fix', 'MAGIC'
EXEC sp_Change_Users_Login 'Auto_Fix', 'EXTERNAL_SUPPORT'

Service Desk Express obtains its SYSADMIN password by decrypting the result from the NAMSYSWORK table (< v9.60) or by decrypting the result from the registry (v9.60 and above). So unless your SYSADMIN password is the same in Production as it is in Test/Training, you will need to update this value. The tool to update this value is the “Set System Admin Password” utility, available by clicking Start > All Programs > BMC Service Desk Express > Set System Admin Password. The problem is that this application won't work unless all the Service Desk Express logins are migrated from Production! So how do you do that?

Well, the easiest route is to use the DBLogins.exe available from …. This does the hard work for you BUT only works if the Production and Test/Training systems are on the same physical network. Another option is to use sp_help_revlogin – a stored procedure that is created by following this knowledge base article. This works very well even if your servers are not on the same network. The only issue with this is that, by default, it will transfer EVERY login whether used by Service Desk Express or not! So an alternative approach is to use a little bit of SQL code that seems to work rather well:

WARNING: The code below will ONLY work with Service Desk Express 9.00 and above.

DECLARE @Code NVARCHAR(30), @Type NVARCHAR(30), @SqlStatement NVARCHAR(4000), @SysAdminPwd NVARCHAR(100)
SET @SysAdminPwd = 'Enter Your SysAdmin Password Here'
-- NOTE: the cursor below is a reconstruction - it assumes the login name
-- lives in a [Code] column in _PERSONNEL_ and _GROUPS_; adjust to your schema.
DECLARE cur CURSOR FOR
SELECT [Code], 'User' FROM [_SMDBA_].[_PERSONNEL_]
UNION ALL SELECT [Code], 'Group' FROM [_SMDBA_].[_GROUPS_]
OPEN cur
FETCH NEXT FROM cur INTO @Code, @Type
WHILE (@@fetch_status <> -1)
BEGIN
	IF (@@fetch_status <> -2)
	BEGIN
		IF NOT EXISTS(SELECT loginname FROM master.sys.syslogins WHERE name = @Code AND dbname = 'master')
		BEGIN
			IF @Type = 'Group'
				SET @SqlStatement = 'CREATE LOGIN [' + @Code + '] WITH PASSWORD = ''' + @SysAdminPwd + ''', CHECK_POLICY = OFF'
			ELSE
				SET @SqlStatement = 'CREATE LOGIN [' + @Code + '] WITH PASSWORD = ''welcome'', CHECK_POLICY = OFF'
			EXEC sp_ExecuteSQL @SqlStatement
		END
		-- Once the login exists we just fix it up with the user in the db
		EXEC sp_Change_Users_Login 'Auto_Fix', @Code
	END
	FETCH NEXT FROM cur INTO @Code, @Type
END
CLOSE cur
DEALLOCATE cur
The above code is a lot simpler than it looks. Essentially, all it does is get all the users/groups from the _PERSONNEL_ and _GROUPS_ tables and then check to see if a SQL login exists for each of them. If it doesn't then it creates it, whilst if it does then it simply fixes it up. A couple of things to point out though:

  • You need to enter your SYSADMIN password, as all group passwords are the same as the SYSADMIN password.
  • Any staff logins it creates will be created with a password of welcome.

Once this is complete you can run the Set System Admin Password utility.

Are we there yet?

Surely we must be there now? I'm afraid not quite. So what is left? Well, unless you took a backup whilst all of the Windows Services were turned off and no one had logged into Service Desk Express since a restart (sounding unlikely, isn't it?), you will have backed up the process/metadata that Service Desk Express uses, which we need to get rid of:


Theoretically, you can now log in (assuming everything above went according to plan) but… your business rules won't work properly, your Integration Engine packages will fail and if you are using Web Services then they will fail too! A couple more housekeeping tasks plus their explanations:

DECLARE @OldAppServerName NVARCHAR(100), @NewAppServerName NVARCHAR(100), @NewIPAddress NVARCHAR(100)
SET @OldAppServerName = 'Enter your OLD app server name here'
SET @NewAppServerName = 'Enter your NEW app server name here'
SET @NewIPAddress = 'Enter your NEW app server IP address here'
UPDATE dbo.WSSERVERS SET NAME = @NewAppServerName, IPADDRESS= @NewIPAddress WHERE NAME = @OldAppServerName

After running this lot you will need to manually start the Service Desk Express Windows Services described at the top of this post, and that will automatically fix up your business rules and stop your Integration Engine and Web Services packages from failing. You should now be good to go.


So as you can see, it is somewhat convoluted transferring a database from one server to another, and hopefully this post has gone some way to providing a solution. Hope it helps. As always, any comments, positive or negative, are welcome.

Financial Year To Date Figures in SQL Reporting Services Grouped By Week

I guess I should lay my cards on the table and say up front that I love Microsoft SQL Reporting Services 2005 and can't wait for a business opportunity to use SQL Reporting Services 2008! That said, there are some things that are just plain difficult to do (or at least I've found them difficult), and as such I thought I'd pen a quick post about producing reports showing financial years grouped by week with/without running values – one of the things that I have found challenging in the past.

As usual I like to explain by example and for the purposes of this post I am going to use the AdventureWorks database that ships with SQL Server 2005 – in particular the Sales.SalesOrderHeader table.

The SQL bit

It is not difficult to get a simple grouping by week of year:

SELECT DatePart(wk, OrderDate) As 'Week', Sum(TotalDue) FROM Sales.SalesOrderHeader WHERE OrderDate BETWEEN '2003-04-01' AND '2004-04-01' GROUP BY DatePart(wk, OrderDate) ORDER BY 1

There are immediately three issues with this query:

  1. Week number 1 is the first week in January 2004 NOT the first week in April 2003 as it should be.
  2. The display is a week number not a start date/end date i.e. Week Beginning.
  3. If there were (which in this case there are not) gaps in sales such that there was a Null week then that week wouldn’t appear at all in the list of results.

I don't claim to be a SQL guru, and as such there may be a really simple solution to the above three issues – I just don't know what it is. I do, however, have a workaround that I find not only effective but flexible, and it has a couple of simple advantages:

YearWeeks Table

What I do is create a simple table with the following structure:

CREATE TABLE dbo.YearWeeks ([Year] int NOT NULL, [WeekNo] int NOT NULL, [DateFrom] datetime NOT NULL, [DateTo] datetime NOT NULL)

Into that table I import (using an Excel spreadsheet like this) all the weeks of my financial year, such that even though the date might be 2004-02-02 (in the example above), the Year column would be set to 2003 and the WeekNo column would be 45. As I want the DateTo column to be the last second of the day in question, I run a little update script:

UPDATE dbo.YearWeeks SET DateTo = DATEADD(ss, 86399, DateTo)

I add 86399 seconds rather than 86400 because there are 86400 seconds in a day, and I want DateTo to land on the last second of the same day.
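A quick check of that arithmetic:

```python
from datetime import datetime, timedelta

# DateTo is imported at midnight; adding 86399 seconds (one second short
# of a full day) moves it to 23:59:59 on the same day.
date_to = datetime(2004, 2, 2)
print(date_to + timedelta(seconds=86399))  # 2004-02-02 23:59:59
```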

Financial Weeks Query

Now I can write a slightly different query that, whilst I know it is less efficient, produces a very elegant and easy result:

SELECT YW.[DateFrom], YW.[DateTo], YW.[WeekNo], (SELECT SUM(SOH.[TotalDue]) FROM Sales.SalesOrderHeader SOH WHERE SOH.[OrderDate] BETWEEN YW.[DateFrom] AND YW.[DateTo]) As 'SalesForWeek' FROM dbo.YearWeeks YW WHERE YW.[Year] = 2003 ORDER BY 1

And that resolves all three of the above issues. Before the next financial year comes round you simply upload another set of dates and all the preparation work is already done for you.
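A toy version of the YearWeeks approach, run in SQLite via Python, shows why gap weeks no longer disappear: because the query is driven from the YearWeeks table rather than from the orders themselves, a week with no sales still comes back, with a Null total (table names simplified from the examples above):

```python
import sqlite3

# Simplified mirror of the YearWeeks query: two weeks defined, orders
# only in week 1. Week 2 still appears, with a NULL (None) total.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE YearWeeks (Year INT, WeekNo INT, DateFrom TEXT, DateTo TEXT);
CREATE TABLE Orders (OrderDate TEXT, TotalDue REAL);
INSERT INTO YearWeeks VALUES (2003, 1, '2003-04-01', '2003-04-07 23:59:59'),
                             (2003, 2, '2003-04-08', '2003-04-14 23:59:59');
INSERT INTO Orders VALUES ('2003-04-02', 100.0), ('2003-04-03', 50.0);
""")
rows = conn.execute("""
    SELECT YW.WeekNo,
           (SELECT SUM(O.TotalDue) FROM Orders O
            WHERE O.OrderDate BETWEEN YW.DateFrom AND YW.DateTo) AS SalesForWeek
    FROM YearWeeks YW WHERE YW.Year = 2003 ORDER BY YW.WeekNo
""").fetchall()
print(rows)  # [(1, 150.0), (2, None)]
```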

Better Solution?

If anyone reading this has a better solution to this please post a comment and let me know as I would love to know what it is and judging by some of the posts on forums around the Internet, others may welcome it too.

The Reporting Services bit

So I promised that I would show the reporting part of this – well, hopefully it is pretty simple to understand now that the above workaround is in place:

ExampleFinancialReport (available for download here)

RunningValue Layout - No Chart

As you can see from the image above it is a very simple report with a table and five columns:

  • Week No: =Fields!WeekNo.Value
  • Date From: =Fields!DateFrom.Value
  • Date To: =Fields!DateTo.Value
  • Sales For Week: =Fields!SalesForWeek.Value
  • Cumulative Sales: =RunningValue(Fields!SalesForWeek.Value, Sum, Nothing)
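RunningValue(…, Sum, Nothing) is simply a cumulative sum over the rows in scope, which you can picture like this:

```python
from itertools import accumulate

# The Cumulative Sales column behaves like a running total over the
# weekly SalesForWeek values, in row order.
sales_for_week = [100.0, 250.0, 175.0]
cumulative = list(accumulate(sales_for_week))
print(cumulative)  # [100.0, 350.0, 525.0]
```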

Charts and RunningValues

As a final thought, let’s say you want a graph that shows the cumulative sales over the weeks of the year like the one below:

RunningValue Example Report

The trick here is that you need a “Dummy” category group before the actual (in our case WeekNo) category group to provide scope for the running value as shown here:

Running Value Layout RunningValue Chart Properties

Then the expression for the Cumulative Sales values becomes:

=RunningValue(Fields!SalesForWeek.Value, Sum, "DummyGroup")


Hope this helps someone – any better ideas would be very welcome and as always, feedback (positive or negative) is always welcome.

Students, Apple, VMWare and Microsoft DreamSpark

I am a genuine student of the Open University (www.open.ac.uk) having so far completed three out of lots of modules in a bid to achieve a Masters Degree in Computing for Commerce and Industry. As a general rule of thumb, so far, this aim has cost me a lot of money (Open University courses are not cheap!) but recently I have been pleasantly surprised by the discounts/freebies that are available to students. So without further ado, and as an introduction to the fact that in future you may well see posts relating to Apple Macs appearing here, I thought I would share a couple of the discounts I have recently received:

Apple iMac

I finally took the plunge and bought an Apple iMac 24″ 3.06GHz Intel Core 2 Duo with 4GB 800MHz DDR2 SDRAM. The retail price for this, either in an Apple shop or online, for us UK customers is £1457.00. In case anyone is interested, by the way, it is absolutely gorgeous and I don’t think I’ll be using a PC anytime soon for anything other than work!

Anyway, I got 12% off the Apple iMac and 11% off the 2GB RAM upgrade resulting in a saving of £184.64. Granted it is still a lot of money but hey, £184.64 is worth having!

VMWare Fusion

I do a lot of work on Microsoft Windows (both servers and desktops) and consequently, at least for the time being, I need access to these environments. VMWare sell a very cool product in the form of VMWare Fusion 2! Essentially this provides virtual environments for the Mac in the same way as their Workstation product does for Windows. Microsoft have a similar (FREE) product called Virtual PC which is also rather neat. However, the really cool things here are that a) I can happily run Linux machines as well as Windows ones and b) I can place links on my Apple desktop to files and applications on my virtual Windows machines using the amazing Unity feature! Take a look at some of the videos on YouTube regarding VMWare Fusion 2.

Anyway, I got 50% off VMWare Fusion 2 resulting in a saving of £33.29.

Microsoft DreamSpark

I mentioned I did a lot of work on Microsoft Windows – usually in the form of Service Desk Express stuff or .NET software development in general. Whilst hunting around for something else on the Internet (as you do) I came across a result offering “Microsoft Gives Students Access to Technical Software at No Charge to Inspire Success and Make a Difference” alternatively known as DreamSpark.

Now, subject (of course) to certain licensing agreements (e.g. not using the software for commercial purposes), students can download and use the full version of the following Microsoft products:

  • Microsoft Visual Studio 2008 Professional
  • Microsoft Visual Studio 2005 Professional
  • Microsoft XNA Game Studio 2.0
  • Microsoft Expression Studio 2
  • Windows Server 2003 R2
  • Windows Server 2008 Standard
  • Microsoft SQL Server 2008 Developer

This is a brilliant initiative by Microsoft and an excellent way for students to develop essential new skills, e.g. Silverlight 2.0, in the forthcoming year.

Hope everyone had a great holiday period and I look forward to receiving your comments throughout this year.

How to dynamically filter a popup list in Service Desk Express

If you have a table that is reused in different circumstances, e.g. the Vendors table which is used to store both manufacturers and suppliers, or if you have significant amounts of legacy data that you still need to track, you may need to filter your popup lists to make it more manageable for your support staff.

Imagine a scenario where you have inherited a legacy asset base and have since introduced initiatives such as strategic sourcing and managed desktops. In the past you may have had to create asset records for inkjet printers and CRT monitors and, as those assets are probably still within your enterprise, they still need to exist in the configuration items and configuration item type tables; going forward, however, the assets you will be adding will be much more standardised. Added to this, whilst you still need to maintain a large vendor list for legacy reasons, the number of vendors you will actually use in the future will be significantly smaller.

So, if this sounds familiar, what options are available to you to handle this? The first and easiest option is simply to inactivate all the records that you don’t want to see. My issue with this approach is that a) inactive, to me, symbolises deleted without loss of history, e.g. an asset that has been disposed of, and b) there are certain features of the application that won’t work properly if a record is inactive – e.g. the booking in of purchase requests where the configuration item type is inactive.

A more flexible option is to use the built-in feature to filter what appears in the popup, which, when combined with a little database administration work, provides, in my opinion, an entirely better solution.

Filtering the vendors

Using the DB Admin Tool I can create an additional field called “Is Manufacturer” in the vendors module as shown below:

I am then going to create another field, exactly the same as the one above except that the default will be “No”, called “Is Supplier” like this:

Now, after logging back into Service Desk Express, I can use the Form Customisation tool to create a copy of the CI Type form. I want to filter the Sales and Service Vendor ID fields to only show vendors where the “Is Supplier” checkbox is ticked and the Manufacturer Name field to only show vendors where the “Is Manufacturer” checkbox is ticked. All I need to do is double-click on the Sales Vendor ID field, select the Filter tab at the top of the screen and enter the code as shown below:
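The filter code itself does not appear to have survived here; based on the repeat step that follows, it would presumably be something along these lines (the exact field name depends on what you called it in the DB Admin Tool):

```
Is Supplier = 1
```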

…and then repeat for the Service Vendor field and the Manufacturer field not forgetting to change the code for this field to:

Is Manufacturer = 1

Obviously, you need to add the “Is Supplier” and “Is Manufacturer” checkboxes to your Vendor form but, other than that, that is it. That should make your Procurement team’s life a little easier.

Filtering the configuration item types (or inventory catalogue)

Now what I want to do is make the engineers’ life a little easier. When they have to create an asset/configuration item, I want them to have a filtered list of CI Types where a) the CI Type is approved and b) the CI Type is flagged to either track asset or track serial.

All I need to do is modify the Configuration Item form and modify the CI Type field, adding a filter as shown below:
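The screenshot with the filter has not survived here, but given the two conditions described above it would presumably look something along these lines (the field names are my guesses and will depend on your schema):

```
Approved = 1 AND (Track Asset = 1 OR Track Serial = 1)
```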


The beauty of using this method is that you can create other versions of the same forms that don’t filter the list, i.e. for Administrators’ use, and enhance the user’s experience of Service Desk Express.

Auto-Refresh An ASPX Page Programmatically

I wanted to be able to allow a user of one of my applications to specify an auto-refresh rate for a summary page. There are a whole bunch of ways of doing this – some discussed below – each having their own strengths and weaknesses as techniques. Given my experience, I thought I would share a) another way of doing it and b) some of the challenges I came across when trying to use other techniques.


Before I begin I think it would help to show, in a little more detail, what I was trying to accomplish. The screenshot below is from my TechMagic project, which essentially allows users a read-only view of a bunch of common tables in Service Desk Express as a series of datagrids housed in an accordion control.

I wanted users to be able to set their own refresh rates that persisted across sessions without storing anything in a database, and as such chose to make use of cookies. That was the easy bit – automatically refreshing the page became a little more challenging.
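For what it's worth, the cookie part really is straightforward; a minimal JavaScript sketch (the cookie name refreshRate is my own choice for illustration) might look like this:

```javascript
// Persist the user's chosen refresh rate in a cookie so it survives
// between sessions, and read it back when the page loads.
function saveRefreshRate(seconds) {
    // Keep the setting for roughly a year
    document.cookie = "refreshRate=" + seconds +
        "; max-age=" + (60 * 60 * 24 * 365);
}

// Parse a cookie string like "refreshRate=30; other=x" and return the
// stored rate, or a default if it is not present.
function readRefreshRate(cookieString, defaultSeconds) {
    var match = /(?:^|;\s*)refreshRate=(\d+)/.exec(cookieString);
    return match ? parseInt(match[1], 10) : defaultSeconds;
}
```

readRefreshRate(document.cookie, 30) then gives you the value to feed into the refresh timer.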

Standard Methods of Auto-Refresh


The HTTP-EQUIV method of auto-refreshing a page is very simple. Essentially, between the <head></head> tags of the HTML you simply include a line: <meta http-equiv="refresh" content="5" />. This would cause the page to refresh every 5 seconds. To make this dynamic, such that a user can enter a value, all we do is change the line slightly to allow access to it programmatically:

<meta id="autoRefresh" runat="server" >

Now in the code-behind we can write some code in the Page_Load event that looks something like:

autoRefresh.HttpEquiv = "refresh";
autoRefresh.Content = txtRefreshRate.Text;

JavaScript Method

The JavaScript method is also very easy to implement. We just create a function that puts a timeout on the page:

<script type="text/javascript" language="javascript">
function refreshPage() {
    window.location = unescape(window.location);
}
function setTimeOut() {
    window.setTimeout(refreshPage, document.getElementById("txtRefreshRate").value * 1000);
}
</script>

<body onload="setTimeOut()">

A slightly different approach

Both these methods essentially do the same thing and work perfectly well. The problem is that if my users had applied sorting to any of the datagrids or collapsed/expanded different panels, all these settings were forgotten after the page refreshed. I rapidly realised that what I ACTUALLY wanted to do was a programmatic postback rather than a refresh. So here is how I did it.

Essentially, I used the JavaScript method above but instead of setting the window.location in the refreshPage function, I simply click the Update button that you can see in the screenshot:

function refreshPage() {
    var btn = document.getElementById("btnUpdate");
    if (btn) btn.click();
}

This works perfectly but seems, at least to me, a very clumsy approach, so as always, I welcome any feedback (positive or negative).