Send cookie with Angular request

April 9, 2018

To make Angular requests send cookies to the server, enable credentials in the $http defaults:

$httpProvider.defaults.withCredentials = true;



MEX vs WSDL

July 14, 2017


MEX and WSDL are two different schemes for telling potential clients about the structure of your service, so you can choose to make your service contracts public as either MEX or WSDL.

A WSDL is generally exposed through HTTP or HTTPS GET URLs that you can’t really configure (say, because of security limitations or backward compatibility). MEX endpoints expose metadata over configurable endpoints and can use different types of transports, such as TCP or HTTP, and different types of security mechanisms.
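As a sketch, a WCF service can expose both at once: a MEX endpoint plus the WSDL GET URL. The service and behavior names below are illustrative, not from the original post:

```xml
<services>
  <service name="Service" behaviorConfiguration="MexBehavior">
    <!-- MEX over HTTP; use mexTcpBinding instead for a TCP transport -->
    <endpoint address="mex" binding="mexHttpBinding"
              contract="IMetadataExchange" />
  </service>
</services>
<behaviors>
  <serviceBehaviors>
    <behavior name="MexBehavior">
      <!-- WSDL exposed at the non-configurable ...?wsdl GET URL -->
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```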

Visual Studio remove references count (Codelens)

July 5, 2017

To show or hide CodeLens in the Visual Studio editor:

Tools -> Options -> Text Editor -> All Languages -> CodeLens -> check/uncheck Enable CodeLens

Manage Certificates

October 26, 2016

The tools for managing the certificate stores on your computer:

for current user store: certmgr.msc

for local computer store: certlm.msc

or use mmc.exe -> File -> Add/Remove Snap-in -> Certificates,
where you can choose the store

Restore NuGet Packages

September 28, 2016

If Solution -> Restore NuGet Packages doesn’t work,
delete the packages folder, run the restore again, and then rebuild the solution.
That should fix it.


Customize Windows Boot menu

July 22, 2016

Customize the Windows boot menu
(the BCD store, successor to the old boot.ini file)

Use the bcdedit tool in an elevated command prompt

bcdedit /?

bcdedit /displayorder {key1} {key2}

bcdedit /set {key} parameter "value"


How to use __doPostBack

February 5, 2015


<script type="text/javascript">
    function SaveWithParameter(parameter) {
        __doPostBack('btnSave', parameter);
    }
</script>

protected void Page_Load(object sender, EventArgs e)
{
    string target = Request["__EVENTTARGET"];     // btnSave
    string argument = Request["__EVENTARGUMENT"]; // parameter
}

Call ASP.NET codebehind from javascript (PageMethod)

November 17, 2014
The first thing to do is to add a ScriptManager to our page
and tell it to enable page methods:

<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" />

Next, add the code-behind method to call; it must be static and have
the WebMethod attribute:

[WebMethod]
public static void SetDimensions(int width, int height)
{
    // Method code
}

Finally, call the method from JavaScript using PageMethods:

<script type="text/javascript">
    function setDimensions(width, height) {
        PageMethods.SetDimensions(width, height);
    }
</script>

WCF service for AJAX

August 6, 2014

WCF service for client AJAX call returning JSON

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Service : IService
{
    // Your code comes here
}

[ServiceContract]
public interface IService
{
    [OperationContract]
    [WebInvoke(Method = "GET",
        ResponseFormat = WebMessageFormat.Json)]
    string GetData(int value);

    [OperationContract]
    [WebInvoke(Method = "POST",
        BodyStyle = WebMessageBodyStyle.Wrapped,
        ResponseFormat = WebMessageFormat.Json)]
    string[] GetUser(string Id);
}

<behaviors>
  <serviceBehaviors>
    <behavior name="ServiceBehavior">
      <serviceMetadata httpGetEnabled="true"/>
      <serviceDebug includeExceptionDetailInFaults="true"/>
    </behavior>
  </serviceBehaviors>
  <endpointBehaviors>
    <behavior name="EndpBehavior">
      <webHttp/>
    </behavior>
  </endpointBehaviors>
</behaviors>
<services>
  <service behaviorConfiguration="ServiceBehavior" name="Service">
    <endpoint address="" binding="webHttpBinding"
              contract="IService" behaviorConfiguration="EndpBehavior"/>
  </service>
</services>

Client code with jQuery:

function CallService() {
    $.ajax({
        type: "POST",                     // GET, POST, PUT or DELETE verb
        url: "Service.svc/GetUser",       // Location of the service
        data: '{"Id": "' + userid + '"}', // Data sent to the server
        contentType: "application/json; charset=utf-8",
        dataType: "json",                 // Expected data format from the server
        processData: false,
        success: function (msg) { ServiceSucceeded(msg); },
        error: ServiceFailed              // Called when the service call fails
    });
}

function ServiceSucceeded(result) {
    var resultObject = result.GetUserResult;
    for (var i = 0; i < resultObject.length; i++) {
        // Process each returned item
    }
}

function ServiceFailed(result) {
    alert('Service call failed: ' + result.status + ' ' + result.statusText);
}

Archiving data in DB

April 30, 2014

Two common possibilities:

  • Storing the archive data in the same table as the current data using a compound primary key (entity ID and version ID).
    • You waste the power of compound primary keys (for example, for index organized tables) because you are forced to always include the version ID as the second key.
    • The approach will result in very large tables of which only a fraction of the rows represent current data.
    • Joins are more complicated and it’s easy to make mistakes if you forget to filter out archive data.
    • Archive data usually doesn’t need as much indexing as production data, but storing the data in the same table applies the index to archive data as well. Your indexes will grow large and will contain a lot of similar data from the archive, thus additionally slowing down performance.
    • While it might be possible to let the DBMS automatically create archive rows using ON UPDATE and ON DELETE triggers, it’s probably complicated (I’ve never seen that anywhere). Otherwise you are forced to implement the archive operation in your application code, thus slowing down performance once more, making it impossible to directly modify the database or to use more than one client application (unless you invest a lot of time and money to keep them in sync).
  • Using a dedicated archive table for each production table and creating a copy of the production data before each update/delete (easy to implement with ON UPDATE and ON DELETE triggers). The archive table has the same structure as the production table plus a column for the archive id (auto-increment).
    • This option enables you to completely implement historization in your DBMS using triggers. You can enable/disable historization for single tables at any time without modifying your client applications.
    • You can use as many client applications as you like or even manually modify rows without losing the archive functionality.
    • You can use different indexes for archive and production data, depending on the actual needs of your applications.
    • If the schema changes, you can update your archive table using standard SQL statements (the same ones that work for the production table). This is a minor drawback, because you must not forget to do so, but it’s still better than having a huge archive of data that cannot easily be compared.
    • The read performance of your production tables is not reduced. Updates and deletes might take a bit longer, but still much faster than “manually” creating archive copies. It should be comparable to the performance of the built-in historization feature of your favorite database.
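The trigger-based second option can be sketched in a few lines. This uses SQLite (via Python’s sqlite3 module so the example is self-contained); the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    id    INTEGER PRIMARY KEY,
    name  TEXT,
    price REAL
);

-- Same structure as the production table, plus an auto-increment archive id.
CREATE TABLE product_archive (
    archive_id INTEGER PRIMARY KEY AUTOINCREMENT,
    id         INTEGER,
    name       TEXT,
    price      REAL
);

-- Copy the old row version into the archive before every update ...
CREATE TRIGGER product_on_update BEFORE UPDATE ON product
BEGIN
    INSERT INTO product_archive (id, name, price)
    VALUES (OLD.id, OLD.name, OLD.price);
END;

-- ... and before every delete.
CREATE TRIGGER product_on_delete BEFORE DELETE ON product
BEGIN
    INSERT INTO product_archive (id, name, price)
    VALUES (OLD.id, OLD.name, OLD.price);
END;
""")

conn.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")
conn.execute("UPDATE product SET price = 12.50 WHERE id = 1")
conn.execute("DELETE FROM product WHERE id = 1")

# The production table is now empty; the archive holds both old versions.
print(conn.execute("SELECT id, name, price FROM product_archive").fetchall())
# -> [(1, 'Widget', 9.99), (1, 'Widget', 12.5)]
```

The client code never mentions the archive table; historization happens entirely in the database, which is exactly what makes this option safe with multiple applications or manual row edits.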