Monday, August 8, 2011

Updated sample deployment files with FAST Search for SharePoint SP1

With Service Pack 1 (or the June 2011 CU) the sample deployment files included with FAST Search for SharePoint have been updated. The single server sample is still the same, as there is not much room for moving components around, but both multi-server samples have changed.
The changes align better with recommended best practices for the number of nodes used in the samples. Depending on your requirements you can use the provided samples as starting points and modify them to your needs.

deployment.sample.multi1.xml

This is a sample configuration for a two server deployment with search redundancy.

Removed failover on indexing

The <content-distributor> and <indexing-dispatcher> nodes have been removed from the second host. In a two node system it is better to dedicate resources to searching, letting one of the nodes be a pure search server. The configuration still has failover for searching with two search rows.
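
A rough sketch of the role split in the updated sample (host names are placeholders and other components are left out):

    <host name="fast01.contoso.com">
        <!-- admin, document processors and webanalyzer also live here -->
        <content-distributor />
        <indexing-dispatcher />
        <searchengine row="0" column="0" />
    </host>
    <host name="fast02.contoso.com">
        <searchengine row="1" column="0" />
        <query />
    </host>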


The number of document processors is increased from 4 to 8

The recommended setup is one document processor per CPU core, and many servers today come with 8 cores (a single quad-core CPU with hyper-threading shows up as 8 cores in the OS).
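
In deployment.xml terms this is a one-line change per host (a sketch):

    <document-processor processes="8" />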


The Enterprise Web Crawler has been removed from the configuration

For most scenarios you will use the web crawler in SharePoint instead of the FAST Enterprise Web Crawler, so there is no need to include it in the deployment configuration.


Max-targets for the webanalyzer has been increased from 2 to 4

Max-targets specifies the number of CPU cores the web analyzer utilizes, and the new value aligns better with the number of cores on a typical server (shown in the sketch below).


Removed failover webanalyzer component

By removing the attribute redundant-lookup="true" from the <webanalyzer> node on the first host, and removing the <webanalyzer> node entirely from the second host, failover of the webanalyzer is removed from the sample configuration. In a two server setup failover on this component is not necessary: if the first host, which is responsible for indexing, dies, indexing stops working, and the webanalyzer depends on indexing to get data to work with.
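
Sketched, the <webanalyzer> node on the first host now reads something like this, without the redundant-lookup attribute (and with no <webanalyzer> node at all on the second host):

    <webanalyzer server="true" max-targets="4" link-processing="true" lookup-db="true" />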

deployment.sample.multi2.xml

This is a sample configuration for a five server deployment with a separate admin server, search redundancy, feeding redundancy and indexing redundancy. The updated sample has removed redundancy on the web analyzer component.

Note that if the server performing the database lookups for link analysis fails, and there is no redundancy, crawling will be blocked until it is back.


Admin server changes

The admin server used to host the Enterprise Web Crawler; it has now been replaced with the web analyzer and 8 document processors.


First indexer node

The web analyzer has been moved over to the admin node, and the number of document processors has been increased from 4 to 8.


Second indexer node

The number of document processors has been increased from 4 to 8, and the failover web analyzer component has been removed from the server.


Search nodes

The search node configurations are left as is, only serving search requests.
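
Summing up the changes above as a rough sketch (host names are placeholders, and the exact attributes and row layout may differ slightly from the shipped sample):

    <host name="fastadmin.contoso.com">
        <admin />
        <webanalyzer server="true" max-targets="4" link-processing="true" lookup-db="true" />
        <document-processor processes="8" />
    </host>
    <host name="fastindex1.contoso.com">
        <content-distributor />
        <indexing-dispatcher />
        <searchengine row="0" column="0" />
        <document-processor processes="8" />
    </host>
    <host name="fastindex2.contoso.com">
        <content-distributor />
        <indexing-dispatcher />
        <searchengine row="1" column="0" />
        <document-processor processes="8" />
    </host>
    <host name="fastsearch1.contoso.com">
        <searchengine row="2" column="0" />
        <query />
    </host>
    <host name="fastsearch2.contoso.com">
        <searchengine row="3" column="0" />
        <query />
    </host>
    <searchcluster>
        <row id="0" index="primary" search="false" />
        <row id="1" index="secondary" search="false" />
        <row id="2" index="none" search="true" />
        <row id="3" index="none" search="true" />
    </searchcluster>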

24 comments:

  1. Great post! Just one question in regards to the two node example. In the <searchcluster> element, are you setting the secondary row's index to "none"? I am confused on whether this is needed or not, since node1 is the indexer and node2 is for search. Since you mentioned that you removed the failover capability, does that also mean that the index is no longer copied over to node2? Thanks.

    Replies
    1. Hi,
      I'm referring to the changes in the out-of-the-box config between RTM and SP1. And node2 will have a copy of the index for search; it just won't be able to act as a backup indexer node.

      Hope this clarifies it.
      Thanks.
      Mikael

  2. Mikael,
    Thank you for your response, it helped to clarify a few things. I have another question for you: if I would like to isolate node1 as just an index server and node2 as just a search server, would I need the "searchengine" and "query" roles on node1, or should they just reside on node2?

    Any tips to improve performance on the configuration below would be greatly appreciated!

    Here are some details about my environment:
    2 virtual servers running Windows Server 2008 R2 x64 SP1, 16GB RAM, 4 cores on each server.

    On node1, which I would like to make an index server, I have these elements in my deployment.xml file:

        <admin />
        <document-processor processes="4" />
        <content-distributor id="0" />
        <indexing-dispatcher />
        <webanalyzer server="true" max-targets="4" link-processing="true" lookup-db="true" redundant-lookup="false" />
        <searchengine row="0" column="0" />
        <query />

    On node2, which I would like to make just a search server, I have the following:

        <document-processor processes="4" />
        <searchengine row="1" column="0" />
        <query />

    Then for the <searchcluster> element I left the default settings as you suggested, so I have:

        <row id="0" index="primary" search="true" />
        <row id="1" index="secondary" search="true" />

    I appreciate any help you would be able to give me. thank you.

  3. Hi,
    The <searchengine> node is needed to store the index, so you want it on both servers, meaning you have a one column, two row setup. You can then remove <query> from node one, and only the second node will perform queries.

    If you take a look at c:\fastsearch\etc\deployment.sample.multi2.xml, it is a similar setup but with more servers. It has two servers used for indexing and two used for searching, which also provides failover on the index itself. You can use multi2 as a sample and remove the config for server3 and server5 (more or less), as well as move the admin stuff to one of your nodes.
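
    As a rough sketch for your two servers, based on the config you posted (host names are placeholders):

        <host name="node1.contoso.com">
            <admin />
            <document-processor processes="4" />
            <content-distributor id="0" />
            <indexing-dispatcher />
            <webanalyzer server="true" max-targets="4" link-processing="true" lookup-db="true" redundant-lookup="false" />
            <searchengine row="0" column="0" />
        </host>
        <host name="node2.contoso.com">
            <document-processor processes="4" />
            <searchengine row="1" column="0" />
            <query />
        </host>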

    Thanks,
    Mikael

    Replies
    1. Hi Mikael,
      Following your suggestion I configured the SharePoint and FAST farms.
      My farm structure is:

      SharePoint: 2 WFEs and 2 application servers.
      App server 1 (FAST_Content_SSA)
      App server 2 (FAST_Query_SSA)

      FAST farm:

      1 admin server (web analyzer, content distributor, document processors)
      2 index servers, and 2 backup servers for indexing (indexing dispatcher)
      2 query servers and 2 backup servers for querying (query processing)

      I have configured the SharePoint and FAST farms in my acceptance environment. On the SharePoint side everything is working fine: the farm is configured properly and the services are running as expected.
      The FAST farm is also configured as described above.
      Now I am trying to establish the connection between the SharePoint and FAST farms.
      For this I installed the FAST certificate on App server 1 and App server 2 (into the MMC personal store).
      The SharePoint certificate is installed on the admin server.

      When I try to crawl the SharePoint content, it is not indexed, and no error is shown.

      Please suggest what the problem could be.

    2. Hi,
      Most likely a certificate or firewall issue. Did you use the self-signed cert, or did you issue one yourself from a cert server? And did you follow all the installation points in every detail for a multi-server deployment? :)

      -m

    3. Thanks for the reply. I copied the FAST certificate to the web server and app server using PowerShell, and I checked the firewalls as well; they are disabled. I am using the self-signed certificate with SSL enabled. I followed the installation procedure from the installation guide. The issue I am facing is this: if I run my Content SSA on the web server where Central Administration is, it works, meaning indexing is working. If I move the Content SSA to App server 1 using modify topology, then my content is not indexed. My doubt is whether the service cannot be moved via topology, whether it is a permission issue, or whether we need to create the Content SSA using PowerShell on the app server.

    4. Hi,
      If you follow http://technet.microsoft.com/en-us/library/ff599534(v=office.14).aspx, adding a crawl component to the app server (which is what I assume you are trying) and then removing the crawl component from CA, it should work.

      Take note of the descriptions of granting access rights to the cert for the OSearch14 account (using the Cert MMC snap-in).

    5. Hi Mikael, yes I have configured it as you suggested, but the crawl goes into an infinite loop; it keeps on crawling. I have also seen errors in the event viewer like "Failed to connect to Server:13391", "Failed to initialize session with document engine: Unable to resolve Contentdistributor", and "An operation failed because the following certificate has validation errors: Subject Name: CN=SharePoint Services, OU=SharePoint, O=Microsoft, C=US; Issuer Name: CN=SharePoint Root Authority, OU=SharePoint, O=Microsoft, C=US; Thumbprint: 476BE8BEC0796031A4449B6D6375CB57DD107711; Errors: SSL policy errors have been encountered. Error code '0x6'". Please advise me on this.

    6. Hi,
      I can only recommend redoing the certification steps on all servers. Be sure to remove any old certificates first using the MMC cert snap-in. And you can use Ping-SPEnterpriseSearchContentService to check if it's working once done.
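
      For example, from a SharePoint 2010 Management Shell (the host name is a placeholder for your content distributor; 13391 is the port from your error message):

        Ping-SPEnterpriseSearchContentService -HostName fast01.contoso.com:13391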

      -m

    7. Thanks for your help. I will try this in my environment and check. Thank you very much for the valuable time you spent on this query. It's really helpful for my experience as well.

  4. Hi Mikael, I am getting an error while configuring the deployment.xml file in FAST Search 2010.
    The error is "Only 1 webanalyzer can have attribute 'server' set to true". What is the problem here? Here is my configuration:

    [configuration stripped by the comment system]
    Replies
    1. Hi,
      I cannot see your config, but only the admin component of the webanalyzer should have server="true". If you run a distributed webanalyzer component, the others should have server="false". The specification for deployment.xml can be found at http://technet.microsoft.com/en-us/library/ff354931(v=office.14).aspx#element_webanalyser
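
      As a rough sketch (host names are placeholders), a distributed webanalyzer could look like:

        <host name="fast1.contoso.com">
            <webanalyzer server="true" max-targets="4" link-processing="true" lookup-db="true" />
        </host>
        <host name="fast2.contoso.com">
            <webanalyzer server="false" link-processing="true" lookup-db="true" />
        </host>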

    2. Thank you very much for your help. It helped me.

  5. Hi Mikael,


    We have a requirement to create refinements (facets) with a tree view structure using a SharePoint web service call.

    What we have configured:

    1. Data is coming from a database.
    2. We use the BCS connector to index the data into FAST.
    3. We create managed properties for the BCS crawled properties (the database columns which are required) to use as refinements.

    Requirement:
    Drill down the tree view structure and, based on the selected refinement, query the results and display them with counts in each tree view node (for example: Samsung (10), Category (7)).


    We have 7 levels of hierarchy:

    Level 1
    Level 2
    Level 3
    Level 4
    Level 5, etc.

    Data in Database:
    Around 20000 records in the database. Each column has 20000 records.

    Our approach:

    We chose to use the SharePoint search web service to pull all the information in XML format.
    We then parse the XML and display the tree view refinements according to the selection.

    Question: is this the right approach? If the XML is huge, will that be a performance issue?



    Please suggest whether there is a simpler way to achieve this without impacting performance.

    It would be really appreciated if you could provide an approach for this.

    Replies
    1. Hi,
      Using the search web service to get the refinements is basically your only option if you are doing calls from outside of SharePoint. As for the size of the XML, that depends on how many refiners you are returning. Are you returning all 20,000? If the data is not security trimmed, you should also implement a cache in your client app.

      Depending on your client app, it's the deserialization of the XML which takes the time, especially for large result sets with deep hierarchies. You can speed this up quite a lot by hand-parsing the XML, but don't do that before you see that it's an issue.

      Your other option is to create a custom service in SharePoint which sends you the data in another form that might be more efficient for you. But remember that the data is sent from FAST as XML to SharePoint, then parsed and rewritten via the SP search object model. So there are many steps and transformations involved already.

      Hope this helps.

      Thanks,
      Mikael

    2. Thanks Mikael. Now I have some idea of what I need to do, thanks for the reply. I have one final query: in general an FQL query will accept 2058 characters in the request. I have a query which will exceed 2058 characters. In this case, how can I send the request?

      Your help is really appreciated.

      Thank you very much.

    3. Hi,
      Unless Microsoft releases a fix/patch, you cannot exceed this limit. The fix is really a simple one if they bother to do it.

      If they change from GET to POST in the Query Proxy service when sending the query on to the FAST QR server, any length should work just fine. I have previously "hacked" this to test it, and it works just fine.

      You should file a service request with Microsoft and petition them to fix this.

      Thanks,
      Mikael

  6. Hi Mikael,
    I have one query regarding SharePoint multilingual content.

    We have a SharePoint document management site, and the documents will be uploaded from a Java portal via the SharePoint web services.

    The metadata values will be provided in multiple languages (like Spanish).

    Is it safe to accept and store such data in SharePoint, given that SQL Server uses Unicode format to store SharePoint data?

    If storing multilingual data in SharePoint is not the right approach, is there any other way we can do it?

    Please let us know your views if possible.

    Replies
    1. Hi,
      Documents are stored as blobs, and the content inside is not affected by your SQL settings. You can store documents in any language you like in SharePoint.

      Thanks,
      Mikael

    2. Hi Mikael,

      Thanks for the quick response.

      Is this applicable to document metadata as well?

      Please confirm.

      Regards
      Guru

    3. Hi,
      Yes. Characters are stored as Unicode, so you can store anything :)
