
Monthly Archives: April 2012

How to receive JSON data back from the server

By: http://p-xr.com/

getJSONfromURL(String url)


import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.json.JSONException;
import org.json.JSONObject;

import android.util.Log;

public static JSONObject getJSONfromURL(String url) {

    // initialize
    InputStream is = null;
    String result = "";
    JSONObject jObject = null;

    // make the HTTP POST call
    try {
        HttpClient httpclient = new DefaultHttpClient();
        HttpPost httppost = new HttpPost(url);
        HttpResponse response = httpclient.execute(httppost);
        HttpEntity entity = response.getEntity();
        is = entity.getContent();
    } catch (Exception e) {
        Log.e("log_tag", "Error in http connection " + e.toString());
    }

    // convert the response stream to a String
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
        StringBuilder sb = new StringBuilder();
        String line = null;
        while ((line = reader.readLine()) != null) {
            sb.append(line + "\n");
        }
        is.close();
        result = sb.toString();
    } catch (Exception e) {
        Log.e("log_tag", "Error converting result " + e.toString());
    }

    // try to parse the String into a JSON object
    try {
        jObject = new JSONObject(result);
    } catch (JSONException e) {
        Log.e("log_tag", "Error parsing data " + e.toString());
    }

    return jObject;
}

The code above can be divided into three parts.

  1. The first part makes the HTTP call.
  2. The second part converts the response stream into a String.
  3. The third part parses the String into a JSONObject.

Referenced by: http://p-xr.com/android-tutorial-how-to-parse-read-json-data-into-a-android-listview/

 

Posted on April 27, 2012 in Android, WCF

 

Missing Index Feature of SQL Server 2008 Management Studio

By: 

Problem
While looking through the new features and improvements in SQL Server 2008 Management Studio (SSMS) we found a potentially interesting one called Missing Index Hints. Database Developers and Administrators can use Missing Index Hints feature to quickly identify the columns on which adding an index can help running the query faster. Can you give us a detailed explanation of how we go about using Missing Index Hints feature?

Solution
Identifying missing indexes is always challenging for a Database Administrator. SQL Server 2008 Management Studio has an excellent feature to identify missing indexes for a particular query when the T-SQL code is executed within the Query window. In order to use this feature you need to press CTRL + M or select the “Include Actual Execution Plan” option from the Query menu of SQL Server 2008 Management Studio before executing the actual code.

Sample T-SQL Query

Let us go through an example by running the T-SQL code below in SQL Server 2008 Management Studio once the “Include Actual Execution Plan” option is selected. The code needs to be run against the Sales.Store table of the AdventureWorks database.

USE AdventureWorks
GO
SELECT CustomerID, Name, SalesPersonID, ModifiedDate
FROM Sales.Store
WHERE (Name = 'Bike World' AND ModifiedDate > '2004-10-01')
GO

Once the above T-SQL code has executed successfully, you need to navigate to the Execution Plan tab. SQL Server 2008 Management Studio will display information about all the missing indexes in the Execution Plan tab for that particular T-SQL code. The best part of this feature is that you can even connect to a SQL Server 2005 instance from SQL Server 2008 Management Studio and get the missing index information for it. The information is shown from the data collected by the Dynamic Management Views since the last time the SQL Server 2008 or SQL Server 2005 instance was restarted.
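
Because the suggestions come from the missing index Dynamic Management Views, you can also query them directly instead of going through the Execution Plan tab. The following is a minimal sketch (the column list is trimmed) that should work on SQL Server 2005 and later:

SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC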

You can move the mouse pointer over the Missing Index text and it will display the T-SQL code required to create the missing index as suggested by SQL Server 2008 Management Studio.

A DBA can generate the T-SQL code to create the missing index by right-clicking the Missing Index text and then selecting the Missing Index Details… option from the menu.

The T-SQL code below will be generated by the SQL Server 2008 Management Studio when the Missing Index Details… option is selected.

/*
Missing Index Details from MissingIndexesHint.sql – LOCALHOST\SQL2008.AdventureWorks (sa (54))
The Query Processor estimates that implementing the following index could improve the query cost by 95.0908%.
*/
/*
USE [AdventureWorks]
GO

CREATE NONCLUSTERED INDEX []
ON [Sales].[Store] ([Name],[ModifiedDate])

GO

*/

Here are a few key notes from the SQL Server 2008 Management Studio script:

  • The associated cost benefit by adding the index is included.  Be sure to review the query plan to validate the query is using the index and test the query with the new index as a portion of your change management process.
  • Please note that the script does not include an index name, so this will need to be created based on your naming standards.

Below is the final script that can be used in your change management process:

USE [AdventureWorks]
GO
CREATE NONCLUSTERED INDEX [IX_Store_Name_Modified_Date]
ON [Sales].[Store] ([Name],[ModifiedDate])

GO

Next Steps

Referenced by: http://www.mssqltips.com/sqlservertip/1945/missing-index-feature-of-sql-server-2008-management-studio/

 

Posted on April 20, 2012 in SQL Server

 

Top 10 steps to optimize data access in SQL Server: Part II (Re-factor TSQL and apply best practices)

By 

Introduction

Remember, we were on a mission. Our mission was to optimize the performance of a SQL Server database. We had an application that was built on top of that database. The application was working fine during testing, but soon after deployment to production, it started to perform slowly as the data volume in the database increased. Within a few months, the application was performing so slowly that the poor developers (including me) had to start this mission to optimize the database and, thus, the application.

Please have a look at the previous article to know how it started and what we did to start the optimization process:

Well, in the first three steps (discussed in the previous article), we implemented indexing in our database. That was because we had to do something that improved database performance quickly, with the least amount of effort. But what if our data access code was written in an inefficient way? What if our TSQL was written poorly?

Applying indexing will obviously improve data access performance, but at the most basic level, in any data access optimization process, you have to make sure that you have written your data access code and TSQLs in the most efficient manner, applying the best practices.

So, in this article, we are going to focus on writing or refactoring data access code using the best practices. But, before we start playing the game, we need to prepare the ground first. So let’s do the groundwork in this very next step:

Step 4: Move TSQL code from the application into the database server

I know you may not like this suggestion at all. You might have used an ORM that generates all the SQL for you on the fly. Or, you or your team might have a “principle” of keeping SQL in your application code (in the Data Access Layer methods). But still, if you need to optimize data access performance, or if you need to troubleshoot a performance problem in your application, I would suggest you move your SQL code into your database server (using Stored Procedures, Views, Functions, and Triggers) from your application. Why? Well, I do have some strong reasons for this recommendation:

  • Moving SQL out of the application and implementing it using Stored Procedures/Views/Functions/Triggers will enable you to eliminate any duplicate SQL in your application. This will also ensure re-usability of your TSQL code.
  • Implementing all TSQL using database objects will enable you to analyze the TSQL more easily to find the inefficient code that is responsible for slow performance. Also, this will let you manage your TSQL code from a central point.
  • Doing this will also enable you to re-factor your TSQL code to take advantage of some advanced indexing techniques (to be discussed in later parts of this series of articles). This will also help you to write more “Set based” SQL and to eliminate any “Procedural” SQL that you might have already written in your application.

Despite the fact that indexing (in Step 1 to Step 3) will let you troubleshoot performance problems in your application in a quick time (if properly done), following step 4 might not give you a real performance boost instantly. But, this will mainly enable you to perform other subsequent optimization steps and apply other techniques easily to further optimize your data access routines.

If you have used an ORM (say, NHibernate) to implement the data access routines in your application, you might find your application performing quite well in your development and test environment. But if you face performance problems in a production system where lots of transactions take place each second, and where too many concurrent database connections are there, in order to optimize your application’s performance, you might have to re-think about your ORM based data access logic. It is possible to optimize an ORM based data access routine, but, it is always true that if you implement your data access routines using TSQL objects in your database, you have the maximum opportunity to optimize your database.

If you have come this far while trying to optimize your application’s data access performance, come on, convince your management and get some time to implement a TSQL object based data operational logic. I can promise you, spending one or two man-months doing this might save you a man-year in the long run!

OK, let’s assume that you have implemented your data operational routines using TSQL objects in your database. Having done this step, you are done with the “ground work” and ready to start playing. Let’s move towards the most important step in our optimization adventure. We are going to re-factor our data access code and apply best practices.

Step 5: Identify inefficient TSQL, re-factor, and apply best practices

No matter how good your indexing is, if your data retrieval/access logic is written poorly, you are bound to get slow performance.

We all want to write good code, don’t we? While we write data access routines for a particular requirement, we really have lots of options to follow for implementing a particular data access routine (and the application’s business logic). But, in most cases, we have to work in a team with members of different caliber, experience, and ideologies. So, while at development, there are strong chances that our team members may write code in different ways, and some of them will skip best practices. While writing code, we all want to “get the job done” first (most of the time). But when our code runs in production, we start to see the problems.

Time to re-factor the code now. Time to implement the best practices in your code.

I have some SQL best practices for you that you can follow. But I am sure that you already know most of them. The problem is, in reality, you just don’t implement this good stuff in your code (of course, you always have some good reasons for not doing so). But what happens at the end of the day? Your code runs slowly, and your client becomes unhappy.

You should know that best practices alone are not enough; you have to make sure that you actually follow them while writing TSQL. This is the most important thing to remember.

Some TSQL Best Practices

Don’t use “SELECT *” in a SQL query
  • Unnecessary columns may get fetched that will add expense to the data retrieval time.
  • The database engine cannot utilize the benefit of “Covered Index” (discussed in the previous article), and hence the query performs slowly.
Avoid unnecessary columns in the SELECT list and unnecessary tables in join conditions
  • Selecting unnecessary columns in a Select query adds overhead to the actual query, especially if the unnecessary columns are of LOB types.
  • Including unnecessary tables in join conditions forces the database engine to retrieve and fetch unnecessary data and increases the query execution time.
Do not use the COUNT() aggregate in a subquery to do an existence check
    • Do not use:
SELECT column_list FROM table WHERE 0 < (SELECT count(*) FROM table2 WHERE ..)

Instead, use:

SELECT column_list FROM table WHERE EXISTS (SELECT * FROM table2 WHERE ...)
  • When you use COUNT(), SQL Server does not know that you are doing an existence check. It counts all matching values, either by doing a table scan or by scanning the smallest non-clustered index.
  • When you use EXISTS, SQL Server knows you are doing an existence check. When it finds the first matching value, it returns TRUE and stops looking. The same applies to using COUNT() instead of IN or ANY.
Try to avoid joins between columns of different data types
    • When joining between two columns of different data types, one of the columns must be converted to the type of the other. The column whose type is lower is the one that is converted.
    • If you are joining tables with incompatible types, one of them can use an index, but the query optimizer cannot choose an index on the column that it converts. For example:
SELECT column_list FROM small_table, large_table WHERE
small_table.float_column = large_table.int_column

In this case, SQL Server converts the integer column to float, because int is lower in the hierarchy than float. It cannot use an index on large_table.int_column, although it can use an index on small_table.float_column.

Try to avoid deadlocks
  • Always access tables in the same order in all your Stored Procedures and triggers.
  • Keep your transactions as short as possible. Touch as little data as possible during a transaction.
  • Never, ever wait for user input in the middle of a transaction.
Write TSQL using “Set based approach” rather than “Procedural approach”
  • The database engine is optimized for Set based SQL. Hence, the Procedural approach (using a Cursor or UDF to process rows in a result set) should be avoided when large result sets (more than 1000 rows) have to be processed.
  • How can we get rid of “Procedural SQL”? Follow these simple tricks:
    • Use inline sub queries to replace User Defined Functions.
    • Use correlated sub queries to replace Cursor based code.
    • If procedural coding is really necessary, at least, use a table variable instead of a cursor to navigate and process the result set.

For more info on “set” and “procedural” SQL, see Understanding “Set based” and “Procedural” approaches in SQL.
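
As a small illustration of the difference (the table and column names in this sketch are made up), a cursor that applies a discount row by row can usually be replaced by a single set based statement:

-- Procedural: process orders one row at a time with a cursor
DECLARE @OrderID int;
DECLARE order_cursor CURSOR FOR
    SELECT SalesOrderID FROM dbo.Orders WHERE OrderDate < '2009-01-01';
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Discount = 0.10 WHERE SalesOrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Set based: one statement does the same work
UPDATE dbo.Orders
SET Discount = 0.10
WHERE OrderDate < '2009-01-01';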

Try not to use COUNT(*) to obtain the record count in a table
    • To get the total row count in a table, we usually use the following Select statement:
SELECT COUNT(*) FROM dbo.orders

This query will perform a full table scan to get the row count.

    • The following query would not require a full table scan. (Please note that this might not give you 100% perfect results always, but this is handy only if you don’t need a perfect count.)
SELECT rows FROM sysindexes 
WHERE id = OBJECT_ID('dbo.Orders') AND indid < 2
Try to avoid dynamic SQL

Unless really required, try to avoid the use of dynamic SQL because:

  • Dynamic SQL is hard to debug and troubleshoot.
  • If the user provides the input to the dynamic SQL, then there is a possibility of SQL injection attacks.
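
If dynamic SQL really is unavoidable, parameterizing it with sp_executesql at least removes the injection risk and lets SQL Server reuse the plan. A minimal sketch (the query itself is illustrative):

DECLARE @sql nvarchar(max);
SET @sql = N'SELECT CustomerID, Name
             FROM Sales.Store
             WHERE Name = @name';

-- Parameters are passed separately, never concatenated into the string
EXEC sp_executesql @sql,
                   N'@name nvarchar(50)',
                   @name = N'Bike World';
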
Try to avoid the use of temporary tables
  • Unless really required, try to avoid the use of temporary tables. Rather use table variables.
  • In 99% of cases, table variables reside in memory, hence they are a lot faster. Temporary tables reside in the TempDB database, so operating on temporary tables requires inter-database communication and hence will be slower.
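
A minimal sketch of the two options (the schema is illustrative); the table variable version avoids the explicit DROP and, for small row counts, most of the TempDB overhead:

-- Temporary table
CREATE TABLE #RecentOrders (SalesOrderID int, OrderDate datetime);
INSERT INTO #RecentOrders
    SELECT SalesOrderID, OrderDate FROM dbo.Orders WHERE OrderDate > '2009-01-01';
-- ... work with #RecentOrders ...
DROP TABLE #RecentOrders;

-- Table variable
DECLARE @RecentOrders TABLE (SalesOrderID int, OrderDate datetime);
INSERT INTO @RecentOrders
    SELECT SalesOrderID, OrderDate FROM dbo.Orders WHERE OrderDate > '2009-01-01';
-- ... work with @RecentOrders ...
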
Instead of LIKE search, use full text search for searching textual data

Full text searches always outperform LIKE searches.

  • Full text searches will enable you to implement complex search criteria that can’t be implemented using a LIKE search, such as searching on a single word or phrase (and optionally, ranking the result set), searching on a word or phrase close to another word or phrase, or searching on synonymous forms of a specific word.
  • Full text search is easier to implement than a LIKE search (especially in the case of complex search requirements).
  • For more info on full text search, see http://msdn.microsoft.com/en-us/library/ms142571(SQL.90).aspx
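
For illustration, and assuming a full text index already exists on the column (the setup is sketched in the refactoring story later in this article), a wildcard LIKE can be replaced by a full text predicate:

-- LIKE search: the leading wildcard forces a scan
SELECT CustomerID, Name FROM Sales.Store
WHERE Name LIKE '%bike%';

-- Full text search: uses the full text index instead
SELECT CustomerID, Name FROM Sales.Store
WHERE CONTAINS(Name, 'bike');
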
Try to use UNION to implement an “OR” operation
  • Try not to use “OR” in a query. Instead, use “UNION” to combine the result sets of two distinct queries. This will improve query performance.
  • Better, use UNION ALL if a distinct result is not required. UNION ALL is faster than UNION as it does not have to sort the result set to eliminate duplicates.
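
A small sketch of the rewrite (the filter values are illustrative):

-- OR based query
SELECT CustomerID, Name FROM Sales.Store
WHERE Name = 'Bike World' OR SalesPersonID = 277;

-- The same result expressed with UNION
SELECT CustomerID, Name FROM Sales.Store WHERE Name = 'Bike World'
UNION
SELECT CustomerID, Name FROM Sales.Store WHERE SalesPersonID = 277;
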
Implement a lazy loading strategy for large objects
  • Store Large Object columns (like VARCHAR(MAX), Image, Text, etc.) in a different table than the main table, and put a reference to the large object in the main table.
  • Retrieve all the main table data in a query, and if a large object is required to be loaded, retrieve the large object data from the large object table only when it is required.
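
A minimal sketch of the idea (the Products/ProductDetails split is hypothetical):

-- The main table keeps only the lightweight columns
CREATE TABLE dbo.Products (
    ProductID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);

-- The large object lives in a separate table, keyed by the same ID
CREATE TABLE dbo.ProductDetails (
    ProductID int PRIMARY KEY
        REFERENCES dbo.Products (ProductID),
    DetailedDescription varchar(max)
);

-- The listing query touches the main table only
SELECT ProductID, Name FROM dbo.Products;

-- The detail query loads the large object on demand
SELECT DetailedDescription FROM dbo.ProductDetails WHERE ProductID = 1;
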
Use VARCHAR(MAX), VARBINARY(MAX), and NVARCHAR(MAX)
  • In SQL Server 2000, a row cannot exceed 8000 bytes in size. This limitation is due to the 8 KB internal page size of SQL Server. So to store more data in a single column, you need to use the TEXT, NTEXT, or IMAGE data types (BLOBs), which are stored in a collection of 8 KB data pages.
  • These are unlike the data pages that store the other data in the same table. These pages are arranged in a B-tree structure. This data cannot be used as variables in a procedure or a function, and it cannot be used inside string functions such as REPLACE, CHARINDEX, or SUBSTRING. In most cases, you have to use READTEXT, WRITETEXT, and UPDATETEXT.
  • To solve this problem, use VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) in SQL Server 2005. These data types can hold the same amount of data BLOBs can hold (2 GB), and they are stored in the same type of data pages used for other data types.
  • When data in a MAX data type exceeds 8 KB, an over-flow page is used (in the ROW_OVERFLOW allocation unit), and a pointer to the page is left in the original data page in the IN_ROW allocation unit.
Implement the following good practices in User Defined Functions
  • Do not call functions repeatedly within your Stored Procedures, triggers, functions, and batches. For example, you might need the length of a string variable in many places of your procedure, but don’t call the LEN function whenever it’s needed; instead, call the LEN function once, and store the result in a variable for later use.
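
A tiny sketch of the LEN point (the variable names are illustrative):

DECLARE @name nvarchar(100);
DECLARE @nameLength int;
SET @name = N'Bike World';

-- Call LEN once and reuse the result instead of calling it repeatedly
SET @nameLength = LEN(@name);

IF @nameLength > 5
    PRINT 'Long name: ' + CAST(@nameLength AS varchar(10));
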
Implement the following good practices in Stored Procedures
  • Do not use “SP_XXX” as a naming convention. It causes additional searches and added I/O (because the system Stored Procedure names start with “SP_”). Using “SP_XXX” as the naming convention also increases the possibility of conflicting with an existing system Stored Procedure.
  • Use “SET NOCOUNT ON” to eliminate extra network trips.
  • Use the WITH RECOMPILE clause in the EXECUTE statement (first time) when the index structure changes (so that the compiled version of the Stored Procedure can take advantage of the newly created indexes).
  • Use default parameter values for easy testing.
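
Putting a few of these together, a procedure skeleton might look like this (the procedure and parameter names are illustrative):

CREATE PROCEDURE dbo.GetStoresByName     -- avoid the SP_ prefix
    @name nvarchar(50) = N'Bike World'   -- default parameter value for easy testing
AS
BEGIN
    SET NOCOUNT ON;                      -- suppress the extra "rows affected" messages

    SELECT CustomerID, Name, SalesPersonID, ModifiedDate
    FROM Sales.Store
    WHERE Name = @name;
END
GO

-- Force a fresh plan after the index structure changes
EXEC dbo.GetStoresByName WITH RECOMPILE;
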
Implement the following good practices in Triggers
  • Try to avoid the use of triggers. Firing a trigger and executing the triggering event is an expensive process.
  • Never use triggers that can be implemented using constraints.
  • Do not use the same trigger for different triggering events (Insert, Update, Delete).
  • Do not use transactional code inside a trigger. The trigger always runs within the transactional scope of the code that fires the trigger.
Implement the following good practices in Views
  • Use views for re-using complex TSQL blocks, and to enable indexed views (will be discussed later).
  • Use views with the SCHEMABINDING option if you do not want to let users modify the table schema accidentally.
  • Do not use views that retrieve data from a single table only (that will be an unnecessary overhead). Use views for writing queries that access columns from multiple tables.
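
A minimal sketch of a schema-bound view (the view name is illustrative; the underlying tables must be referenced with two-part names):

CREATE VIEW dbo.vStoreSummary
WITH SCHEMABINDING
AS
SELECT s.CustomerID, s.Name, s.SalesPersonID
FROM Sales.Store AS s;
GO
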
Implement the following good practices in Transactions
  • Prior to SQL Server 2005, after BEGIN TRANSACTION and each subsequent modification statement, the value of @@ERROR had to be checked. If its value was non-zero, then the last statement caused an error, and if an error occurred, the transaction had to be rolled back and an error raised (for the application). In SQL Server 2005 and onwards, the Try...Catch block can be used to handle transactions in TSQL. So try to use Try...Catch based transactional code.
  • Try to avoid nested transactions. Use the @@TRANCOUNT variable to determine whether a transaction needs to be started (to avoid nested transactions).
  • Start a transaction as late as possible and commit/rollback the transaction as fast as possible to reduce the time period of resource locking.
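
A hedged sketch of the Try...Catch pattern together with the @@TRANCOUNT check (the table names and values are illustrative):

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Orders       SET Discount = 0.10 WHERE SalesOrderID = 43659;
    UPDATE dbo.OrderDetails SET UnitPriceDiscount = 0.10 WHERE SalesOrderID = 43659;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- @@TRANCOUNT tells us whether there is still an open transaction to roll back
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    RAISERROR ('The order update failed.', 16, 1);
END CATCH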

And, that’s not the end. There are lots of best practices out there! Try finding some of them from the following URL: MSDN.

Remember, you need to implement the good things that you know; otherwise, your knowledge will not add any value to the system that you are going to build. Also, you need to have a process for reviewing and monitoring the code (that is written by your team) to see whether the data access code is being written following the standards and best practices.

How to analyze and identify scope for improvement in your TSQL?

In an ideal world, you would always prevent diseases rather than cure them. But in reality, you just can’t prevent them all. I know your team is composed of brilliant professionals. I know you have a good review process, but still, bad code gets written and poor design takes place. Why? Because, no matter what advanced technology you use, your client’s requirements will always be far ahead of it, and this is a universal truth in software development. As a result, designing, developing, and delivering a system based on requirements will always be a challenging job for you.

So, it’s equally important that you know how to cure. You really need to know how to troubleshoot a performance problem after it happens. You need to learn ways to analyze your TSQL code, identify the bottlenecks, and re-factor them to troubleshoot performance problems. There are numerous ways to troubleshoot database and TSQL performance problems, but at the most basic level, you have to understand and review the execution plan of the TSQL that you need to analyze.

Understanding the query execution plan

Whenever you issue a SQL statement to the SQL Server engine, SQL Server first has to determine the best possible way to execute it. In order to carry this out, the Query Optimizer (a system that generates the optimal query execution plan before executing the query) uses information such as the data distribution statistics, index structure, and metadata to analyze several possible execution plans and finally select the one that is likely to be the best execution plan most of the time.

Did you know? You can use SQL Server Management Studio to preview and analyze the estimated execution plan for the query that you are going to issue. After writing the SQL in SQL Server Management Studio, click on the estimated execution plan icon (see below) to see the execution plan before actually executing the query.

(Note: Alternatively, you can switch the actual execution plan option “on” before executing the query. If you do this, Management Studio will include the actual execution plan that is being executed along with the result set in the result window.)


Estimated execution plan in Management Studio

Understanding the query execution plan in detail

Each icon in the execution plan graph represents an action item (Operator) in the plan. The execution plan has to be read from right to left, and each action item has a percentage of cost relative to the total execution cost of the query (100%).

In the above execution plan graph, the first icon on the rightmost side represents a “Clustered Index Scan” operation (reading all primary key index values in the table) on the HumanResources.Employee table (which requires 100% of the total query execution cost), and the leftmost icon in the graph represents a SELECT operation (which requires only 0% of the total query execution cost).

Following are the important icons and their corresponding operators you are going to see frequently in the graphical query execution plans:


(Each icon in the graphical execution plan represents a particular action item in the query. For a complete list of the icons and their corresponding action items, go to http://technet.microsoft.com/en-us/library/ms175913.aspx.)

Note the “Query cost” in the execution plan given above. It has 100% cost relative to the batch. That means, this particular query has 100% cost among all queries in the batch as there is only one query in the batch. If there were multiple queries simultaneously executed in the query window, each query would have its own percentage of cost (less than 100%).

To know more details for each particular action item in the query plan, move the mouse pointer on each item/icon. You will see a window that looks like the following:


This window provides detailed estimated information about a particular query item in the execution plan. The above window shows the estimated detailed information for the clustered index scan: it looks for the row(s) that have Gender = ‘M’ in the Employee table in the HumanResources schema in the AdventureWorks database. The window also shows the estimated IO, CPU, number of rows, size of each row, and other costs that are used to compare with other possible execution plans to select the optimal plan.

I found an article that can help you further understand and analyze TSQL execution plans in detail. You can take a look at it here: http://www.simple-talk.com/sql/performance/execution-plan-basics/.

What information do we get by viewing the execution plans?

Whenever any of your queries performs slowly, you can view the estimated (and, if required, actual) execution plan and identify the item that is taking the greatest amount of time (in terms of percentage) in the query. When you start reviewing any TSQL for optimization, most of the time the first thing you would like to do is view the execution plan. You will most likely quickly identify the area in the SQL that is creating the bottleneck in the overall SQL.

Keep watching for the following costly operators in the execution plan of your query. If you find one of these, you are likely to have problems in your TSQL and you need to re-factor the TSQL to improve performance.

Table Scan: Occurs when the corresponding table does not have a clustered index. Most likely, creating a clustered index or defragmenting index will enable you to get rid of it.

Clustered Index Scan: Sometimes considered equivalent to Table Scan. Takes place when a non-clustered index on an eligible column is not available. Most of the time, creating a non-clustered index will enable you to get rid of it.

Hash Join: The most expensive joining methodology. This takes place when the joining columns between two tables are not indexed. Creating indexes on those columns will enable you to get rid of it.

Nested Loops: In most cases, this happens when a non-clustered index does not include (cover) a column that is used in the SELECT column list. In this case, for each member in the non-clustered index column, the database server has to seek into the clustered index to retrieve the other column value specified in the SELECT list. Creating a covered index will enable you to get rid of it.

RID Lookup: Takes place when you have a non-clustered index but the same table does not have any clustered index. In this case, the database engine has to look up the actual row using the row ID, which is an expensive operation. Creating a clustered index on the corresponding table would enable you to get rid of it.

TSQL Refactoring – A real life story

Knowledge comes into value only when applied to solve real-life problems. No matter how knowledgeable you are, you need to utilize your knowledge in an effective way in order to solve your problems.

Let’s read a real life story. In this story, Mr. Tom is one of the members of the development team that built the application that we mentioned earlier.

When we started our optimization mission in the data access routines (TSQLs) of our application, we identified a Stored Procedure that was performing way below the expected level of performance. It was taking more than 50 seconds to process and retrieve sales data for one month for particular sales items in the production database. Following is how the Stored Procedure was getting invoked for retrieving sales data for ‘Caps’ for the year 2009:

exec uspGetSalesInfoForDateRange '1/1/2009', '31/12/2009', 'Cap'

Accordingly, Mr. Tom was assigned to optimize the Stored Procedure.

Following is a Stored Procedure that is somewhat close to the original one (I can’t include the original Stored Procedure for proprietary issues):

ALTER PROCEDURE uspGetSalesInfoForDateRange
    @startYear DateTime,
    @endYear DateTime,
    @keyword nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT
        Name,
        ProductNumber,
        ProductRates.CurrentProductRate Rate,
        ProductRates.CurrentDiscount Discount,
        OrderQty Qty,
        dbo.ufnGetLineTotal(SalesOrderDetailID) Total,
        OrderDate,
        DetailedDescription
    FROM
        Products
        INNER JOIN OrderDetails
            ON Products.ProductID = OrderDetails.ProductID
        INNER JOIN Orders
            ON Orders.SalesOrderID = OrderDetails.SalesOrderID
        INNER JOIN ProductRates
            ON Products.ProductID = ProductRates.ProductID
    WHERE
        OrderDate BETWEEN @startYear AND @endYear
        AND
        (
            ProductName LIKE '' + @keyword + ' %' OR
            ProductName LIKE '% ' + @keyword + ' ' + '%' OR
            ProductName LIKE '% ' + @keyword + '%' OR
            Keyword LIKE '' + @keyword + ' %' OR
            Keyword LIKE '% ' + @keyword + ' ' + '%' OR
            Keyword LIKE '% ' + @keyword + '%'
        )
    ORDER BY
        ProductName
END

GO

Analyzing the indexes

As a first step, Mr. Tom wanted to review the indexes of the tables that were being queried in the Stored Procedure. He had a quick look into the query and identified the fields that the tables should have indexes on (for example, fields that have been used in the join queries, WHERE conditions, and ORDER BY clauses). Immediately, he found that several indexes are missing on some of these columns. For example, indexes on the following two columns were missing:

  • OrderDetails.ProductID
  • OrderDetails.SalesOrderID

He created non-clustered indexes on those two columns, and executed the Stored Procedure as follows:

exec uspGetSalesInfoForDateRange '1/1/2009', '31/12/2009', 'Cap' with recompile

The Stored Procedure’s performance was improved now, but still below the expected level (35 seconds). (Note the “with recompile” clause. It forces the SQL Server engine to recompile the Stored Procedure and re-generate the execution plan to take advantage of the newly built indexes).
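
The article does not show the index definitions themselves; a minimal sketch of what was created might look like this (the index names are illustrative):

CREATE NONCLUSTERED INDEX IX_OrderDetails_ProductID
    ON OrderDetails (ProductID);
GO

CREATE NONCLUSTERED INDEX IX_OrderDetails_SalesOrderID
    ON OrderDetails (SalesOrderID);
GO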

Analyzing the query execution plan

Mr. Tom’s next step was to see the execution plan in the SQL Server Management Studio. He did this by writing the ‘exec‘ statement for the Stored Procedure in the query window and viewing the “Estimated execution plan”. (The execution plan is not included here as it is quite a big one that is not going to fit in the screen.)

Analyzing the execution plan, he identified some important scopes for improvement:

  • A table scan was taking place on a table while executing the query even though the table has a proper indexing implementation. The table scan was taking 30% of the overall query execution time.
  • A “nested loop join” (one of three kinds of joining implementation) was occurring for selecting a column from a table specified in the SELECT list in the query.

Curious about the table scan issue, Mr. Tom wanted to know whether any index fragmentation had taken place (because all indexes were properly implemented). He ran a TSQL script that reports the index fragmentation information for the tables in the database (he collected this from a CodeProject article on data access optimization) and was surprised to see that two of the existing indexes (in the corresponding tables used in the TSQL in the Stored Procedure) had fragmentation that was responsible for the table scan operation. Immediately, he defragmented those two indexes and found that the table scan was no longer occurring and the Stored Procedure was now taking 25 seconds to execute.
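
The fragmentation report itself is not reproduced here; on SQL Server 2005 and later, a hedged sketch of such a check and the follow-up defragmentation could be:

-- Report average fragmentation for every index in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Defragment a heavily fragmented index (use REORGANIZE for light fragmentation)
ALTER INDEX IX_OrderDetails_ProductID ON OrderDetails REBUILD;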

In order to get rid of the “nested loop join”, he implemented a “Covered index” on the corresponding table, including the column used in the SELECT list. As a result, when selecting the column, the database engine was able to retrieve the column value from the non-clustered index node itself. Doing this brought the query execution time down to 23 seconds.
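
A covering index is simply a non-clustered index that carries the extra SELECT-list column in its leaf level; a minimal sketch (the table and columns are illustrative):

-- The INCLUDE clause stores OrderQty in the index leaf pages,
-- so the SELECT no longer needs a lookup into the clustered index
CREATE NONCLUSTERED INDEX IX_OrderDetails_ProductID_Covering
    ON OrderDetails (ProductID)
    INCLUDE (OrderQty);
GO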

Implementing some best practices

Mr. Tom now decided to look for any piece of code in the Stored Procedure that did not conform to the best practices. Following were the changes that he did to implement some best practices:

Getting rid of the “Procedural code”

Mr. Tom identified that a UDF ufnGetLineTotal(SalesOrderDetailID) was getting executed for each row in the result set, and the UDF simply was executing another TSQL using a value in the supplied parameter and was returning a scalar value. Following was the UDF definition:

ALTER FUNCTION [dbo].[ufnGetLineTotal]
(
    @SalesOrderDetailID int
)
RETURNS money
AS
BEGIN

    DECLARE @CurrentProductRate money
    DECLARE @CurrentDiscount money
    DECLARE @Qty int

    SELECT
        @CurrentProductRate = ProductRates.CurrentProductRate,
        @CurrentDiscount = ProductRates.CurrentDiscount,
        @Qty = OrderQty
    FROM
        ProductRates INNER JOIN OrderDetails ON
        OrderDetails.ProductID = ProductRates.ProductID
    WHERE
        OrderDetails.SalesOrderDetailID = @SalesOrderDetailID

    RETURN (@CurrentProductRate-@CurrentDiscount)*@Qty
END

This seemed to be a “Procedural approach” for calculating the order total, and Mr. Tom decided to implement the UDF’s TSQL as an inline SQL in the original query. Following was the simple change that he had to implement in the Stored Procedure:

dbo.ufnGetLineTotal(SalesOrderDetailID) Total        -- Old Code
(CurrentProductRate-CurrentDiscount)*OrderQty Total -- New Code

Immediately after executing the query, Mr. Tom found that the query was taking 14 seconds now to execute.

Getting rid of the unnecessary Text column in the SELECT list

Exploring for further optimization scope, Mr. Tom decided to take a look at the column types in the SELECT list in the TSQL. Soon he discovered that a Text column (Products.DetailedDescription) was included in the SELECT list. Reviewing the application code, Mr. Tom found that this column value was not being used by the application immediately. A few columns in the result set were being displayed in a listing page in the application, and when the user clicked on a particular item in the list, a detail page was appearing containing the Text column value.

Excluding that Text column from the SELECT list dramatically reduced the query execution time from 14 seconds to 6 seconds! So, Mr. Tom decided to apply a “Lazy loading” strategy to load this Text column using a Stored Procedure that accepts an “ID” parameter and selects the Text column value. After implementation, he found out that the newly created Stored Procedure executes in a reasonable amount of time when the user views the detail page for an item in the item list. He also converted those two “Text” columns to VARCHAR(MAX) columns, which enabled him to use the LEN() function directly on one of these two columns in the TSQL in other places (that also saved some query execution time, because he had been calculating the length by casting the Text column to VARCHAR(8000) first in the earlier version of the code).

Optimizing further: Process of elimination

What’s next? All the optimization steps so far reduced the execution time to 6 seconds. Compared to the execution time of 50 seconds before optimization, this is a big achievement. But Mr. Tom thought the query still had room for improvement. Reviewing the TSQL code, he didn’t find any significant option left for further optimization. So he indented and re-arranged the TSQL (so that each individual condition, say, Product.ProductID = OrderDetail.ProductID, was written on its own line) and started executing the Stored Procedure again and again, commenting out each line that he suspected of having improvement scope.

Surprise! Surprise! The TSQL had some LIKE conditions (the actual Stored Procedure basically performed a keyword search on some tables) for matching several patterns against some column values. When he commented out the LIKE statements, the Stored Procedure execution time suddenly dropped below 1 second. Wow!

It seemed that, with all the other optimizations done, the LIKE searches were taking the most amount of time in the TSQL code. After carefully looking at the LIKE search conditions, Mr. Tom became pretty sure that the LIKE search based SQL could easily be implemented using Full Text Search. It seemed that two columns needed to be full text search enabled. These were: ProductName and Keyword.

It took him just 5 minutes to implement the FTS (creating the Full Text catalog, making the two columns full text enabled, and replacing the LIKE clauses with the FREETEXT function), and the query now started executing within a stunning 1 second!
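
The FTS setup itself is not shown in the article; a hedged sketch of those three steps (the catalog name and the name of the unique key index are illustrative) might be:

-- 1. Create a full text catalog
CREATE FULLTEXT CATALOG ProductSearchCatalog;
GO

-- 2. Make the two columns full text enabled
--    (KEY INDEX must name an existing unique index on the table)
CREATE FULLTEXT INDEX ON Products (ProductName, Keyword)
    KEY INDEX PK_Products
    ON ProductSearchCatalog;
GO

-- 3. Replace the LIKE clauses with a full text predicate, e.g.
-- ... WHERE FREETEXT((ProductName, Keyword), @keyword) ...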

Great achievement, isn’t it?

What’s next?

We’ve learned lots of things about optimizing data access code, but we still have miles to go. Data access optimization is an endless process that gives you endless thrills and fun. No matter how big your systems are, no matter how complex your business processes are, believe me, you can make them run faster, always!

So, let’s not stop here. Let’s go through the next article in this series:

Happy optimizing! Have fun!

Referenced by: http://www.codeproject.com/Articles/35665/Top-10-steps-to-optimize-data-access-in-SQL-Server

 

Posted on April 17, 2012 in SQL Server

 

The Definite Guide to Notification Window in Silverlight 4.0

By: pluginbaby

Note: This article was first published on Web Central Station, but the code formatting had problems there, so I am republishing it here.
When this article was written the latest version of Silverlight was Silverlight 4.

Notification Window is a Silverlight Out-of-Browser feature (alongside the WebBrowser control, Elevated Trust, and offline DRM) that allows your application to display a notification as a little pop-out window that disappears automatically. This is sometimes called a “toast”, and is similar to the Outlook live notification when new emails arrive in your inbox.

Usage

The Notification Window is only available in Out-of-Browser (OOB) mode. You can take advantage of this feature to display confirmation of a user’s actions (“Product saved”), application status (“Application updated, please restart”), or to warn the user that data was refreshed while they were working in another application.

Silverlight Notification Window
Notification Window (”Now Playing”) used in a Silverlight Media Player app.

How to use the NotificationWindow?

  1. Check if the app is running Out-of-Browser (with App.Current.IsRunningOutOfBrowser)
  2. Instantiate a NotificationWindow object
  3. Set its Height and Width
  4. Set the Content property to something of type FrameworkElement (UserControl, TextBlock, …)
  5. Use the Show() method

As mentioned on MSDN: “You must set Height and Width on a notification window before Show is called and the notification displays. Attempting to set the height/width of a displayed notification throws an exception.”

Notification layout

The typical usage for the NotificationWindow visual is to create a UserControl to serve as the NotificationWindow.Content, so you can design the UI of the notification in Blend. But it could be something as simple as a TextBlock, or you could use a custom control which comes with its whole XAML graph (Canvas, Grid, Button, …) and code.

Show me the code!

Code to create a simple Notification:

Assuming you have a NotificationWindow nw; defined in your class, you could add this code in the click event of a Button:

// If a notification is already opened, close it
if (nw != null)
    nw.Close();

// Create a new notification window
nw = new NotificationWindow();
nw.Width = 400;
nw.Height = 100;

// Create a new TextBlock
TextBlock text = new TextBlock();
text.Text = "Simple Notification";
text.FontSize = 26;
text.HorizontalAlignment = HorizontalAlignment.Center;
text.VerticalAlignment = VerticalAlignment.Center;

// Set the content of the notification with the TextBlock
nw.Content = text;

// Show the notification for 3 seconds
nw.Show(3000);


Code to create a custom Notification:

if (nw != null)
    nw.Close();

// Create a new notification window
nw = new NotificationWindow();
nw.Height = 100;
nw.Width = 400;

// Create a custom control for the Notification UI
CustomNotification customContent = new CustomNotification();
customContent.Header = "Custom Notification Header";
customContent.Text = "This is a custom Notification text.";

// Set the content of the notification with the control
nw.Content = customContent;

// Show the notification for 4 seconds
nw.Show(4000);

The code for the CustomNotification custom control is available with the source code linked below.

Things you need to know

Size: 
Maximum size for a Notification Window is: 400x100px.

Appearance: 
Currently (SL4) a Notification Window does not support transparency (for security reasons), nor rounded corners.
You can only style the content inside the Window with your own control. The notification defaults to a blank background.

Multiple notifications: 
You can’t have more than one notification at the same time (or you’ll get an InvalidOperationException), and the API does not provide a built-in queue mechanism. But nothing prevents you from building your own system to queue several notifications, and it’s fairly easy; see Tim Heuer’s post.

Effect: 
No fade in/fade out like in Outlook right now. No transition/animation possible for the Notification Window itself.

Position: 
Silverlight always displays the notification in the common location for your OS: on a Windows system it will appear in the lower right corner of the screen, on a Mac in the upper right.

Interactions: 
You can interact with a Notification Window, but only with the mouse; there are no keyboard events. Also, you can’t run actions like navigating from a HyperlinkButton, using SaveFileDialog or OpenFileDialog, accessing the Clipboard, or switching to full-screen mode.

Duration:
The maximum duration for a notification is 30 seconds, that is, NotificationWindow.Show(30000).

Download the code

https://skydrive.live.com/redir.aspx?cid=0e564ed4426a5ecc&resid=E564ED4426A5ECC!5407&parid=E564ED4426A5ECC!5370&authkey=!

Referenced by: http://weblogs.asp.net/lduveau/archive/2010/12/10/the-definite-guide-to-notification-window-in-silverlight-4.aspx

 

Posted on April 10, 2012 in Silverlight

 

Notification of Silverlight Applications about Database State Changes

By: Max Paulousky

Introduction

There is a big class of software applications that should provide the ability for users to be notified about other users’ changes. Software for stock exchanges, medical practices, and clinics has to have the above-mentioned feature. Silverlight applications are suitable for such areas because they have a rich interface while remaining thin clients.

To implement the notification feature for Silverlight applications, we should implement the following architecture:

Fig. 1. Architecture of the application.

The architecture of the solution that allows Silverlight applications to be notified about DB changes.

If a client updates a particular piece of data, then all other clients should be notified about the changes. This means a two-way (duplex) connection should be established between the middle tier and the client tier.

The middle tier should save the data after receiving it from the client tier. The data tier is the layer that can say the data was saved successfully, and it should notify the middle tier about the changes.

The database does not have direct access to the middle tier, so we should use one of the following solutions to notify it about the changes:

  • There should be a special table in the database which will be used for storing information about database changes. The middle tier should read it periodically for changes and notify the client tier about those.
  • CLR SQL triggers can be implemented to notify the middle tier via a Web Service.
  • Microsoft Notification Service can be used to notify the middle tier about database changes.

I do not think the first variant is a good one. The middle tier would be monitoring the database, sending requests all the time, and that is a waste of resources.

The third variant requires using additional software and I am leaving it for another article.

I’m going to describe the second variant and unlock the secrets of building duplex Silverlight applications, CLR triggers, and using WCF services from CLR triggers.

Background

To implement such an application, I am going to use the following technologies:

  • Silverlight for the client tier
  • WCF services (Polling Duplex Binding/WS HTTP Binding)
  • CLR triggers for SQL server

Technical requirements

The following software was used:

  • Windows XP SP3/IIS 5.1
  • VS 2008 SP1
  • .NET 3.5 SP1
  • Microsoft Silverlight Projects 2008 Version 9.0.30730.126
  • Silverlight 3.0.40818.0
  • Silverlight Toolkit July 2009
  • Silverlight 3 Developer Package
  • Silverlight 3 SDK
  • .NET RIA Services (July 2009 Preview)
  • MS SQL Server 2005

Getting started

I am going to implement a solution that consists of several parts:

  • A Web application. It hosts the Silverlight application and the Web Services (see below).
  • Silverlight application. This application will receive notifications about the database changes.
  • CLR trigger. It will contain some logic for sending data from the trigger via a WCF service.
  • SQL Server. Hosts CLR triggers.

Web application – DuplexSample.Web

The Web application is a middle tier for the Silverlight application and is based on a standard template Silverlight Navigation Application (New Project/Silverlight/Silverlight Navigation Application).

I am going to implement two WCF services. The first one (DBNotificationService) is intended for doing callbacks to the Silverlight application. The second one (DBAuditService) is intended for receiving data from CLR triggers.

Why should I use such a complex schema instead of using the client tier service (DBNotificationService) to notify the Silverlight applications directly? The answer is simple: Polling Duplex Binding is not implemented for non-Silverlight applications, and no other binding mode can be used to implement a duplex channel. So, I decided to implement one more service.

DBNotificationService

DBNotificationService is based on the Polling Duplex Binding protocol. This protocol allows data to be exchanged between the client tier (the Silverlight application) and the middle tier (the web application).

This service includes four files:

  • DBNotificationService.svc. Contains the service metadata and is used for generating the source code (proxy classes/interfaces) for the client part of the WCF service.
  • DBNotificationService.svc.cs. Contains the implementation of the WCF service.
  • IDBNotificationCallbackContract.cs. Contains an interface for the callback contract. The method (void SendNotificationToClients(DBTriggerAuditData data)) will be used to send data from the middle tier to the clients.
  • IDBNotificationService.cs. Contains an interface for the service contract that describes the methods that the client can call to subscribe (void SubscribeToNotifications()) or unsubscribe (void UnsubscribeToNotifications()) to notifications.

This article is not a guide for WCF services, but I am going to describe the details of the implementation of the mentioned service.

Implementation of the SubscribeToNotifications() method.
public void SubscribeToNotifications()
{
  IDBNotificationCallbackContract ch = 
    OperationContext.Current.GetCallbackChannel<IDBNotificationCallbackContract>();
  string sessionId = OperationContext.Current.Channel.SessionId;

  //Any message from a client we haven't seen before 
  //causes the new client to be added to our list
  //(Basically, treated as a "Connect" message)
  lock (syncRoot)
  {
    if (!SilverlightClientsList.IsClientConnected(sessionId))
    {
      SilverlightClientsList.AddCallbackChannel(sessionId, ch);
      OperationContext.Current.Channel.Closing += new EventHandler(Channel_Closing);
      OperationContext.Current.Channel.Faulted += new EventHandler(Channel_Faulted);
    }
  }
}

I call the method above from the client tier to subscribe to notifications from the middle tier. The implementation of this method gets the incoming callback channel and stores it in a list; it also hooks up the Closing and Faulted events of the incoming channel (see below).

The stored channel will be used to send the notification to all clients.

Implementation of the UnsubscribeToNotifications() method.
public void UnsubscribeToNotifications()
{
    ClientDisconnect(OperationContext.Current.Channel.SessionId);
}

I call the method above from the client tier to unsubscribe from notifications from the middle tier. The implementation of this method just deletes the callback channel from the list by its session ID.

Initialization of the callback channel events; implementation of the ClientDisconnect method.
private void Channel_Closing(object sender, EventArgs e)
{
    IContextChannel channel = (IContextChannel)sender;
    ClientDisconnect(channel.SessionId);
}

private void Channel_Faulted(object sender, EventArgs e)
{
    IContextChannel channel = (IContextChannel)sender;
    ClientDisconnect(channel.SessionId);
}

private void ClientDisconnect(string sessionId)
{
    lock (syncRoot)
    {
      if (SilverlightClientsList.IsClientConnected(sessionId))
        SilverlightClientsList.DeleteClient(sessionId);
    }
}

These methods are executed when a client disconnects/is disconnected.

The class DBNotificationService is marked with the following attribute:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, 
 InstanceContextMode = InstanceContextMode.Single)]

ConcurrencyMode = ConcurrencyMode.Multiple means the service instance is multi-threaded, and the developer has to take care of synchronization. So, I lock the list of Silverlight channels before reading/writing, to prevent inconsistency.

InstanceContextMode = InstanceContextMode.Single means only one InstanceContext object is used for all incoming calls.

DBAuditService

DBAuditService is based on the WSHttpBinding protocol (BasicHttpBinding can be used as well). I am going to use this protocol to set a channel between the data tier (SQL Server database) and the middle tier.

So, any changes in the database will be intercepted by the appropriate DB trigger, which should send a notification about the changes to the middle tier; the middle tier then forwards it to the clients using the stored channels (callbacks).

This service consists of four files:

  • DBAuditService.svc. Contains the service metadata and is used for generating the source code (proxy classes/interfaces) for the client part of the WCF service.
  • DBAuditService.svc.cs. Contains the implementation of the WCF service.
  • DBTriggerAuditData.cs. Contains a data contract (data transfer object) that will be used for exchanging data between the data/middle tiers and the middle/client tiers. This class contains two string properties. The first one contains the name of the table for which the trigger fired, and the second one contains the audit data received from the trigger.
  • IDBAuditService.cs. Contains an interface for the service contract that describes the method that can be called to send audit data.

The service implements only one method: SendTriggerAuditData. It enumerates all the client channels and executes the SendNotificationToClients method for each channel. The parameter for this method is data we received from the trigger.

Implementation of the SendTriggerAuditData(DBTriggerAuditData data) method.
public void SendTriggerAuditData(DBTriggerAuditData data)
{
    Guard.ArgumentNotNull(data, "data");

    if (SilverlightClientsList.GetCallbackChannels().Count() > 0)
    {
      lock (syncRoot)
      {
        IEnumerable<IDBNotificationCallbackContract> channels = 
            SilverlightClientsList.GetCallbackChannels();
        channels.ToList().ForEach(c => c.SendNotificationToClients(data));
      }
    }
}

Adding non-Silverlight clients

Of course, it is not only Silverlight clients that can get notifications. One more service should be implemented to notify other clients about database changes.

This service should use WSDualHttpBinding, and it should be implemented similarly to the DBNotificationService service. In this case, DBAuditService should send notifications using the channels of the non-Silverlight clients as well.

Silverlight application – DuplexSample

This Silverlight application is a client tier based on the standard template ‘Silverlight Navigation Application’ (New Project/Silverlight/Silverlight Navigation).

I just added two controls to HomePage.xaml:

<Button Content="Connect" Click="ButtonConnect_Click" 
   x:Name="ButtonConnect" Margin="10"></Button>
<ListBox Grid.Row="1" ScrollViewer.VerticalScrollBarVisibility="Visible" 
  x:Name="ListBox1"></ListBox>

The first one is a Button, and the user can use it to connect/disconnect from the server. The second one is a ListBox, and it displays the text of incoming messages.

In the constructor of the page, I just initialize the DBNotification service.

private DBNotificationClient client;
private ObservableCollection<string> liveDataMessages = 
                          new ObservableCollection<string>();

public Home()
{
    InitializeComponent();

    ListBox1.ItemsSource = liveDataMessages;

    client = new DBNotificationClient(new PollingDuplexHttpBinding(), 
       new EndpointAddress("http://localhost:2877/" + 
                           "DBNotificationService/DBNotificationService.svc"));
    client.SendNotificationToClientsReceived += (sender, e) =>
      {
        DBTriggerAuditData data = e.data;
        liveDataMessages.Add(data.TableName + ": " + data.Data);
      };
}

SendNotificationToClientsReceived is the event that is raised when a message from the middle tier is received; here it is handled with an anonymous delegate.

The methods Subscribe/Unsubscribe just execute the corresponding method of the WCF service (SubscribeToNotificationsAsync/UnsubscribeToNotificationsAsync) and define the anonymous delegates that will be executed after finishing connecting/disconnecting.

private void Subscribe()
{
    ButtonSubscribe.Content = "Subscribing...";
    client.SubscribeToNotificationsCompleted += (sender, e) =>
    {
      ButtonSubscribe.Content = "Subscribed (click to unsubscribe)";
      subscribed = true;
    };
    client.SubscribeToNotificationsAsync();
}

private void Unsubscribe()
{
    ButtonSubscribe.Content = "Unsubscribing...";
    client.UnsubscribeToNotificationsCompleted += (sender, e) =>
    {
      ButtonConnect.Content = "Unsubscribed (click to subscribe)";
      subscribed = false;
    };
    client.UnsubscribeToNotificationsAsync();
}

CLR trigger – DuplexSample.SqlTriggers

This project is a class library, and it contains just one class, AppUser, with a static method, AppUserAudit. This method creates a message according to the changed rows and fields and sends it via the DBAudit service. The code for building the audit log was taken from the SQL Server documentation, and you can find a lot of information about this feature there. I just added code to this method to send the audit data via the service.

// Create a client for the DBAudit service; the address has to be hard-coded
// because a CLR trigger cannot read App.config (see below).
EndpointAddress endpoint = 
  new EndpointAddress(new Uri("http://localhost:2877/" + 
                      "DBAuditService/DBAuditService.svc"));
DBAuditClient client = 
  new DBAuditClient(new WSHttpBinding(SecurityMode.None), endpoint);

DBTriggerAuditData data = new DBTriggerAuditData();
data.TableName = "[dbo].[AppUser]";
data.Data = sb.ToString();

try
{
    client.SendTriggerAuditDataCompleted += (sender, e) =>
    {
      // Note: this handler runs asynchronously, so an exception thrown here
      // is not caught by the surrounding try/catch.
      if (e.Error != null)
        throw new ApplicationException("An error occurred while sending audit data", e.Error);
    };
    client.SendTriggerAuditDataAsync(data);
}
catch (Exception)
{
    throw;
}

Please note that the address of the service is hard-coded. I explain below why I had to do it this way.
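For orientation, here is a rough sketch of the shape of the trigger method around that snippet. It is not the exact code from the sample: the way the changed rows are read below is only an assumption (the real log-building code follows the SQL Server documentation, as mentioned above).

using System.Data.SqlClient;
using System.Text;
using Microsoft.SqlServer.Server;

public class AppUser
{
    // The SqlTrigger attribute tells SQL Server which table and events the
    // method handles once the assembly is registered with CREATE TRIGGER.
    [SqlTrigger(Name = "AppUserAudit", Target = "AppUser",
                Event = "FOR INSERT, UPDATE, DELETE")]
    public static void AppUserAudit()
    {
        SqlTriggerContext context = SqlContext.TriggerContext;
        StringBuilder sb = new StringBuilder();
        sb.Append(context.TriggerAction.ToString());

        // The changed rows are available through the INSERTED/DELETED
        // pseudo-tables over the context connection.
        using (SqlConnection connection =
               new SqlConnection("context connection=true"))
        {
            connection.Open();
            SqlCommand command = new SqlCommand("SELECT * FROM INSERTED", connection);
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    sb.Append("; ").Append(reader[0]);
            }
        }

        // ... build DBTriggerAuditData and send it via DBAuditClient,
        // as shown in the snippet above.
    }
}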

Adding a CLR trigger

To add a CLR trigger to a SQL Server, I have to do the following actions:

  • Create an assembly in the database that corresponds to the assembly with the CLR trigger (DuplexSample.SqlTriggers);
  • Create a trigger that is based on the CLR trigger.

The following SQL commands correspond to the actions above:

create ASSEMBLY [DuplexSample.SqlTriggers] FROM 
    'C:\Projects\Sandbox\DuplexSample\DuplexSample.SqlTriggers\bin\Debug\DuplexSample.SqlTriggers.dll' 
    WITH PERMISSION_SET = UNSAFE

where [DuplexSample.SqlTriggers] is the name of the assembly within the database, and PERMISSION_SET = UNSAFE is the level of permissions for that library (see below for details).

CREATE TRIGGER AppUserAudit
  ON AppUser
  FOR Insert,Update,Delete 
  AS
  EXTERNAL NAME [DuplexSample.SqlTriggers].AppUser.AppUserAudit

That’s all! Your trigger is ready to be fired. You can go to the appropriate table and try to change the data – the trigger will be executed, and the data will be sent via the WCF service.

CLR trigger tricks

I spent quite some time before the trigger started working properly. I ran into several issues, and I describe them here to save others the trouble.

First of all, a CLR trigger is loaded within the SQL Server process, so neither App.config nor other assembly parameters (Assembly.Location, for example) are available. I cannot even get the path to the assembly; therefore, all parameters of the WCF service have to be hard-coded or supplied in a different way.
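One alternative to hard-coding, sketched here only as an idea I did not use: since the trigger already runs inside SQL Server, the endpoint address could be read from a settings table over the context connection. The dbo.Settings table and its columns below are hypothetical, not part of the sample.

using System.Data.SqlClient;

// Hypothetical helper: reads the WCF endpoint address from a settings table
// instead of hard-coding it in the trigger assembly.
internal static class TriggerSettings
{
    public static string GetServiceAddress()
    {
        using (SqlConnection connection =
               new SqlConnection("context connection=true"))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "SELECT Value FROM dbo.Settings WHERE Name = 'DBAuditServiceUrl'",
                connection);
            // Fall back to the hard-coded address if the row is missing.
            object value = command.ExecuteScalar();
            return value as string
                ?? "http://localhost:2877/DBAuditService/DBAuditService.svc";
        }
    }
}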

By default, SQL Server does not allow CLR code to be executed, so it has to be enabled manually:

EXEC sp_configure 'show advanced options', '1';
go
reconfigure;
go

EXEC sp_configure 'clr enabled', '1';
go
reconfigure;
go

EXEC sp_configure 'show advanced options', '0';
go
reconfigure;
go

SQL Server does not allow unsafe assemblies to be added (and if your trigger calls a WCF service, the assembly will definitely be unsafe). To allow adding unsafe assemblies, the following command should be executed:

ALTER DATABASE [Silverlight.DuplexNotification]
SET TRUSTWORTHY ON

Even with TRUSTWORTHY turned on, the assembly still has to be registered with the special parameter PERMISSION_SET = UNSAFE (see the CREATE ASSEMBLY statement above).

Note: Setting the database to trustworthy can cause problems with security, so it would be better to use the certificate/asymmetric key scenario (see here for details).

The CLR trigger assembly depends on a number of other assemblies, and they have to be installed into SQL Server as well. SQL Server can install them automatically only if they are located in the same folder as the CLR trigger assembly. I added all such assemblies to the CLR trigger project as references and set the Copy Local property to true for each of them. See the project file DuplexSample.SqlTriggers.csproj for the list of added assemblies.

There is one assembly that should be added to the database manually, because SQL Server does not do it:

create ASSEMBLY [Microsoft.VisualStudio.Diagnostics.ServiceModelSink] FROM 
    'C:\Projects\sandbox\DuplexSample\DuplexSample.SqlTriggers\bin\Debug\Microsoft.VisualStudio.Diagnostics.ServiceModelSink.dll' 
    WITH PERMISSION_SET = UNSAFE

If you change your CLR trigger, just rebuilding is not enough. You have to rebuild it, delete the trigger and the assembly from the database, and add them again.

If you are going to experiment with CLR triggers, the following code deletes the trigger and the assembly in one go:

drop trigger AppUserAudit
GO
drop assembly [DuplexSample.SqlTriggers]
GO

Debugging CLR triggers is a very simple process. You just set a breakpoint, attach to the sqlservr.exe process (main menu, Debug, Attach to Process), and try to change the table. The most amazing thing is that the web application (the server part of the WCF services) can be debugged at the same time as the CLR trigger.

Fig. 2. Attaching to the SQL Server process to debug the CLR trigger.

This dialog allows you to attach to the SQL Server process and start debugging CLR triggers.

Sometimes notifications do not reach the Silverlight application; just restart the built-in web server. Sometimes the Silverlight application crashes Internet Explorer (I do not know the reason); use Firefox instead.

Demo application

The source code of the application contains all the above techniques, and is ready to be compiled and deployed. There are some things the developer should change/update before running:

  • Change the port number/create the IIS folder for the web project (if you want to use another port or IIS instead of the built-in web server).
  • Create a database from the given scripts.

To start the demo, I compile all the libraries, add the CLR trigger to the database, run one or several Silverlight applications, subscribe to notifications, and then make any change to the table for which the CLR trigger was created.

The following figures display how the application works:

Fig. 3. Two Silverlight applications are started, connected to the service and ready to subscribe to notifications.


Fig. 4. Two Silverlight applications are subscribed to notifications and receive information about the insert operation.


Fig. 5. Two Silverlight applications receive information about the update operation.


Fig. 6. One Silverlight application is disconnected, a second one is connected and receives information about the delete operation.


The following SQL statements were used to change the data in the database:

insert into AppUser Values ('Test User 1', '123456', 'test1@test.test')
insert into AppUser Values ('Test User 2', '654321', '')
-------
update AppUser set email = 'test2@test.test' where Name = 'Test User 2'
-------
delete from AppUser

Console application

The application DuplexSample.ConsoleApp was added to the solution just to provide a way to emulate the CLR trigger. The application connects to DBAuditService and sends test data; the data reaches the web server, which then forwards it to every subscribed client.
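The core of such an emulator is only a few lines. The following is a rough sketch, not the exact code of the sample project; the DBAuditClient proxy and the DBTriggerAuditData contract are the ones generated from the service reference used elsewhere in the sample.

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Same endpoint and binding as the CLR trigger uses, but running in an
        // ordinary console process, which makes debugging much easier.
        EndpointAddress endpoint = new EndpointAddress(
            "http://localhost:2877/DBAuditService/DBAuditService.svc");
        DBAuditClient client =
            new DBAuditClient(new WSHttpBinding(SecurityMode.None), endpoint);

        DBTriggerAuditData data = new DBTriggerAuditData();
        data.TableName = "[dbo].[AppUser]";
        data.Data = "Test message sent at " + DateTime.Now;

        client.SendTriggerAuditDataCompleted += (sender, e) =>
            Console.WriteLine(e.Error == null ? "Sent." : e.Error.Message);
        client.SendTriggerAuditDataAsync(data);

        Console.WriteLine("Press Enter to exit.");
        Console.ReadLine();
    }
}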

Referenced by: http://www.codeproject.com/Articles/45161/Notification-of-Silverlight-Applications-about-Dat

 
Leave a comment

Posted by on April 10, 2012 in Silverlight

 

How to show headers in each page of the report in Microsoft Report Viewer

By RoboLover


In the Row Groups pane below the report design surface, there is a row labelled "Static". It is a tablix member; to repeat the headers on every page, set the following properties on it (if the static member is not visible, switch the grouping pane to Advanced Mode first):

RepeatOnNewPage = True and KeepWithGroup = After

 

Referenced by: http://stackoverflow.com/questions/5469292/how-to-show-headers-in-each-page-of-the-report-in-microsoft-report-viewer

 
Leave a comment

Posted by on April 9, 2012 in Microsoft Report

 

[Q] Kindle Fire Stuck in Yellow Triangle After Attempted Root

Try this:

1) Download v6 of the Kindle Fire Utility from here. (The forum thread can be found here)
2) Extract the files into a folder anywhere (but remember where, obviously)
3) Go into the folder, then into ‘drivers’ and open up adb_usb.ini in a text editor and add “0x18D1” (without the quotes) on a line underneath “0x1949”
4) Open up Device Manager and find “kindle”. It should have a yellow triangle near it.
5) Download fastboot.exe from here
6) Open up command prompt and navigate to wherever you saved fastboot to.
7) Run the command “fastboot oem idme bootmode 4000”
8) Power off your device by holding down the power button for ~10 seconds and turn it back on, and this time don’t hit the power button to go into TWRP. It should move past the yellow triangle after a few seconds and boot into Kindle. Hope this helps!

I tried this and it won't work; what am I doing wrong?
Why is there a step 4? Is it only there so I can check whether the Kindle shows up, or do I have to update its driver so there is no yellow triangle anymore?

EDIT:
So I managed to get it running. What I did was add the "0x18D1" as in step 3, but after that I opened Device Manager and, by clicking on the Kindle (the one with the yellow triangle next to it), I updated its driver, pointing it to the location of the adb_usb.ini I edited. After that I ran Kindle Fire Utility, chose option 1 (Boot mode), and in the following menu I tried all of the options. Option 3 (Recovery) did something, and now every time I boot the Kindle it goes to the yellow triangle, but pressing the power button once starts a normal boot.

OK, but I would still like to get it completely back to normal if possible, so if anyone can help, thanks.

 

Referenced by: http://forum.xda-developers.com/showthread.php?p=23038503

 
2 Comments

Posted by on April 4, 2012 in Uncategorized