Oct 27 2013
 
Datastore Name: Couchbase
Description: Couchbase Server makes it easy to modify your applications without the constraints of a fixed database schema. Sub-millisecond, high-throughput reads and writes give you consistent high performance. Couchbase Server is easy to scale out, and it supports topology changes with no downtime.
Company: Couchbase
Licensing: Apache 2.0 License, subscription-based support
Written in: C, Erlang
Protocol: Memcached, REST
Query methods/options: Incremental MapReduce, SQL-like N1QL (developer preview)
Family: Document Store / Key-Value Store
API/Drivers: Java, C#, PHP, C, Python, Node.js, Ruby, Go
Links: http://www.couchbase.com
Who uses: http://www.couchbase.com/customer-stories
Important Features (High Availability, Scalability, Consistency, …)
  • flexible data model (schema-free)
  • easy scalability
  • consistent high performance
  • always on, 24×365
  • auto-sharding
  • cross-cluster replication
  • built-in object-level cache
  • masterless data replication with auto-failover
  • management & monitoring UI
  • reliable storage architecture

Couchbase Server is a distributed document (“NoSQL”) database management system, designed to store the information behind web applications. Couchbase Server provides a managed in-memory caching tier, so it supports very fast create, store, update and retrieve operations.
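To make that concrete, here is a minimal sketch of basic key/value access, assuming the Couchbase Python SDK 2.x Bucket API and a bucket named default on localhost; the key and document are made up for illustration.

from couchbase.bucket import Bucket  # Couchbase Python SDK 2.x, assumed installed

# Connect to the "default" bucket on a local node (connection string is illustrative).
bucket = Bucket('couchbase://localhost/default')

# Documents are JSON values addressed by a key; writes hit the managed cache first,
# so create/update/read round-trips are typically sub-millisecond.
bucket.upsert('user::1001', {'name': 'Ayse', 'visits': 1})

result = bucket.get('user::1001')
print(result.value)          # -> {'name': 'Ayse', 'visits': 1}

# Atomic counters are a single call as well.
bucket.counter('user::1001::logins', delta=1, initial=1)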

PROS:

First of all, I must say that I was very impressed by its performance during my tests. On a 3-node cluster you can easily see 10-20K ops/sec.

Well, I will not publish any performance benchmark results here, but you can create one of your own: just visit https://github.com/couchbaselabs and you will find the necessary tools and source code.

The documentation is very good; you do not get lost on the site.

In production, knowing the limits is the key to success, and Couchbase lets you know its limits (http://docs.couchbase.com/couchbase-manual-2.2/#appendix-limits).

When I first met Couchbase, I had some doubts because Couchbase uses CouchDB as its storage back-end. If your data changes very often, the append-only storage files grow quickly and your cluster can soon suffer storage problems.

But bucket compaction works very well, and I am sure it will get even better in the future. One thing you must consider: if your bucket receives a lot of CRUD operations, it is better to schedule the compaction. There are some other tricks to consider as well (http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-admin-web-console-settings-autocompaction.html).

Couchbase is really user/developer/admin friendly. You can easily see what is going on in your cluster by using the web console, and when things go wrong, the web console is a huge advantage.

The same web console guides you during installation and while managing your cluster; adding or removing nodes is a one-click operation.

You can configure alerts, auto-failover and some of the bucket properties easily through the web console. If you prefer, you can also use the command line interfaces locally or via SSH.

The command line interfaces hold real hidden gems for administration (http://docs.couchbase.com/couchbase-manual-2.1/#administration-tasks); data transfer, backup and restore operations are very easy.

If you have already worked with MongoDB, I am sure you know that configuring your cluster (sharding, replica sets, etc.) can be very painful. You do not experience the same pain with Couchbase.

Incremental map/reduce views save a lot of time if you want to build real-time or near-real-time aggregations. You can create secondary indexes with views, and you can even query your views by key ranges.
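As a rough sketch of what a range query against a view looks like, the snippet below calls the view REST API (port 8092) with Python's requests library; the bucket, design document and view names are made up for illustration, and view keys must be JSON-encoded in the query string.

import json
import requests  # third-party HTTP client, assumed available

# Query the illustrative "by_name" view of design document "users" on bucket
# "default" for keys between "A" and "C".
params = {
    'startkey': json.dumps('A'),
    'endkey': json.dumps('C'),
    'limit': 10,
    'stale': 'update_after',   # serve from the current index, update it afterwards
}
url = 'http://localhost:8092/default/_design/users/_view/by_name'

for row in requests.get(url, params=params).json()['rows']:
    print(row['key'], row['id'])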

Do not try to do too many things in one view; keep your views as simple as possible. For example, if you need full-text search or text-based search, just DON'T try it with views. Instead, use the Couchbase-Elasticsearch plugin.

The most used objects are kept in RAM. If you have lots of data with long item keys and metadata, you may need a larger cluster.
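A quick back-of-the-envelope sketch of why this matters: Couchbase keeps the key plus some per-item metadata resident in RAM for every item, so the overhead scales with key length and item count. The per-item metadata figure below is an assumption; check the sizing appendix of the manual for your version.

# Rough RAM estimate for the keys and metadata that stay resident per item.
ITEMS = 100 * 1000 * 1000   # 100 million documents
AVG_KEY_BYTES = 60          # long, descriptive keys add up quickly
META_BYTES = 56             # assumed per-item metadata overhead; verify for your version
COPIES = 2                  # active copy plus one replica

overhead_gb = ITEMS * (AVG_KEY_BYTES + META_BYTES) * COPIES / 1024 ** 3
print('~%.0f GB of RAM just for keys and metadata' % overhead_gb)   # ~22 GB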

The community gets larger day by day.

You can always see the issues and road-map on their JIRA.

CONS:

Sometimes re-reduce can be confusing for developers. Before you start developing, understand incremental map/reduce views and how to implement re-reduce correctly.
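Here is a sketch of what that looks like in practice, pushed over the view REST API with Python (bucket and design document names are illustrative): a counting view whose reduce function must behave differently on the re-reduce pass, because it then receives partial counts from earlier reduce calls instead of the raw emitted values.

import json
import requests  # assumed available

# On the first pass the reduce sees the emitted values and counts them; on the
# re-reduce pass it sees partial counts and must sum them. Forgetting that
# distinction is the classic re-reduce bug.
reduce_fn = """
function (keys, values, rereduce) {
  if (rereduce) {
    var total = 0;
    for (var i = 0; i < values.length; i++) { total += values[i]; }
    return total;               // sum partial counts
  }
  return values.length;         // count raw emitted rows
}
"""

design_doc = {
    'views': {
        'count_by_type': {
            'map': 'function (doc, meta) { if (doc.type) { emit(doc.type, null); } }',
            'reduce': reduce_fn,
        }
    }
}

# Design documents live under /<bucket>/_design/<name> on the view port (8092).
# A SASL-protected bucket would also need HTTP basic auth here.
requests.put(
    'http://localhost:8092/default/_design/stats',
    data=json.dumps(design_doc),
    headers={'Content-Type': 'application/json'},
)

For plain counts and sums, the built-in _count and _sum reduce functions already handle re-reduce for you.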

If you are accustomed to SQL-based systems or MongoDB, you may feel that the query options in Couchbase are limited and that it is not possible to write ad-hoc queries.

But do not worry, there is a solution named N1QL. You can see the latest commits in the GitHub repo. Even though it is still under heavy development, it can save you a lot of time.

It is not possible to create a new bucket during rebalance.

BEST FIT: Distributed persistent caching layer, session store, time series, job distribution system, simple data distribution system

FINAL THOUGHTS:

If you need low-latency, highly concurrent applications with simple query requirements, plus easy installation and management, Couchbase is your best friend.

Once you taste Couchbase, you will look at other datastores with a different eye.

P.S.: Please let me know if something is missing or misplaced.


Dec 02 2011
 

Microsoft Dynamics CRM SDK version 5.0.8 has been released. You can download it from the MSDN library or MSDN downloads.

Updates in the SDK:

  • Updated binaries for the Portal developer toolkit and developer extensions. New features include support for authenticating portal users through the Windows Azure Access Control Service.
  • A new section describing developer integration and access to Microsoft Office 365.
  • Updated helper code for authentication to support more scenarios.
  • A new topic providing guidance for creating solution components that support multiple languages.
  • New samples for auditing, entity serialization, bulk delete and more.
  • See the complete list of new and updated samples and links to all the new topics in the Release History on the first page.

 

Nov 13 2011
 

You may have noted that Microsoft Dynamics CRM Sustained Engineering has released fixes for Internet Explorer-related memory leaks affecting our Microsoft CRM 2011 web client.  I’ve noticed that occasionally IE clients may still leak memory, eventually resulting in slower page load times or Out Of Memory errors, even though the latest CRM 2011 Update Rollup has been installed on your CRM servers.

Please note: Windows client servicing, via Windows Update or WSUS (Windows Server Update Services) + SCCM (System Center Configuration Manager) is also vital.  We’ve identified a fix that can potentially help with Internet Explorer memory leaks.  This fix was pushed through Windows Update via MS10-090: Cumulative security update for Internet Explorer.  It was released earlier this year and is likely already on your client machines.  HOWEVER: the fix I’m referencing here is registry enabled, so to receive the benefits of this fix, you need to add the registry keys mentioned in the associated Microsoft Knowledge Base article:

KB 975736: A memory leak occurs when you open a Web page that contains a circular reference in an iframe

The registry changes needed to activate the fix are described in this Knowledge Base article, and for large enterprises, these registry keys are generally pushed by GPO (Group Policy Objects) or other means.

In any case, I heartily recommend that you ensure MS10-090 is installed on your client machines and the KB 975736 fix is enabled if you are still experiencing Internet Explorer-related memory leaks when the latest CRM 2011 Server Update Rollup (as of this writing, Update Rollup 5) is installed.

 

http://blogs.msdn.com/b/crminthefield/archive/2011/11/11/microsoft-dynamics-crm-2011-related-microsoft-windows-internet-explorer-memory-leaks-and-windows-microsoft-update.aspx


Nov 05 2011
 

While I was importing a CRM 4.0 database into CRM 2011, I was getting an error while publishing the reports, and the import progress stopped by itself.

After some investigation of the error details, I realized there were custom reports in the CRM 4.0 database.

I had to back up the reports (I assume you know how to do it by editing the report in CRM)

and then ran the query below against the CRM 4.0 organization database (myorgname_MSCRM).

DELETE FROM [myorgname_MSCRM].[dbo].[ReportBase]
WHERE SignatureId IS NULL
GO

So, this query deletes all custom reports and their versions.

After that, you can import the database from 4.0 into 2011 successfully.

P.S.: The query is an UNSUPPORTED solution. Never forget to back up your database and reports.

 

Nov 02 2011
 

Open a Windows PowerShell window with Administrative Rights.

PS > Add-PSSnapin Microsoft.Adfs.PowerShell

PS > Get-ADFSRelyingPartyTrust -Name "your_relying_party_name"

PS > Set-ADFSRelyingPartyTrust -TargetName "your_relying_party_name" -TokenLifetime 7200

So this command sets the token lifetime to 7200 minutes (120 hours).

Don't forget to replace your_relying_party_name with the real name from AD FS 2.0 Management -> Relying Party Trusts.

Nov 02 2011
 

SQL Server Reporting Services can be configured in two modes:

  • Native mode
  • SharePoint Integration mode

When Reporting Services runs in native mode, the reports are stored on the SQL Server report server. When a user requests a report, the response comes directly from the Report Server in SQL Server.

When Reporting Services runs in SharePoint integration mode, SharePoint retrieves the reports from the report server via its web service and serves them to end users.

Installation Instructions

Installation instructions for SQL Server 2008 R2 to enable Reporting Services in SharePoint integration mode:

1. Select the SQL Server Feature Installation option on the SQL Server 2008 R2 setup page

2. On the feature selection page, select Database Engine Services and Reporting Services, then click Next

3. On the Server Configuration page, enter the appropriate credentials and click Next

Note: SharePoint 2010 products in a farm configuration require domain accounts for service configuration.

4. On the Reporting Services Configuration page, select Install SharePoint integrated mode default configuration and click Finish

Configuration steps on Report Server

After setup is finished, you can verify the installation by connecting to the report server: browse to http://<your server name>/reportserver

1. Open Reporting Services Configuration Manager from the Configuration Tools


2. Select the report server instance that you installed in SharePoint Integration mode


3. Configure the web service URL and test that it is working. We are going to use this URL in the SharePoint Central Administration tool


4. Make sure your database is using SharePoint Integration Mode


Configuration Steps on SharePoint Server

If the Reporting Services Add-in for SharePoint is not installed on your server, please download and install it from http://www.microsoft.com/download/en/details.aspx?id=622

1. Open the SharePoint Central Administration tool and then click General Application Settings


2. Click on Reporting Services Integration link under Reporting Services.

Enter the report server web service URL that you configured earlier in the Web Service URL box. Select the authentication mode that you are using, enter the credentials in the respective boxes, and then click OK.


3. Upload the .rdl files (SQL Server report files) to a document library in SharePoint. If the configuration is successful, you should be able to see the SQL Server report rendered in SharePoint 2010.


 


Sep 23 2011
 

While configuring Claims-based authentication in Deployment Manager , error message box appeared.It may because of the length of SSL Cetificate name.If it’s longer than 128 chars , the Deployment Manager can not write it to the MSCRM_Config database.
Here is the solution and worked for me.But you should know that the solution is unsupported :
Run the query for MSCRM_CONFIG database .This query updates the Certificates table and its schema:

ALTER TABLE Certificates ALTER COLUMN Name NVARCHAR(256);

UPDATE MSCRM_CONFIG.dbo.ConfigurationMetadata
SET ConfigurationMetadataXml = REPLACE(
    CAST(ConfigurationMetadataXml AS NVARCHAR(MAX)),
    '<Column Name="Name"><Description>Name of the Certificate</Description><Type>nvarchar</Type><Length>128</Length>',
    '<Column Name="Name"><Description>Name of the Certificate</Description><Type>nvarchar</Type><Length>256</Length>'
);