Aura’s Helper Equivalent in LWC

Back when we were creating Lightning components on the Aura framework, developers were hooked on helper methods. We were all told that reusable code should go into the helper. However, when we move to LWC development, with no helper JavaScript file, where do we put all this reusable code? Let us take a look.

Reusable helpers in LWC

In LWC, a component bundle has just one JavaScript file, so the reusable code has to live within this file. Let’s take a quick example of how we can call a method that fires a toast message. We’ll make this method generic and reference it from multiple places. I’ve also added a pattern-matching method to illustrate the same idea.

import { LightningElement } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

export default class MyClass extends LightningElement {
    searchTerm = null;
    // Function for managing toast messages
    showToast(titleMsg,description,variant) {
        const event = new ShowToastEvent({
            title: titleMsg,
            message: description,
            variant : variant
        });
        this.dispatchEvent(event);
    }
    // Returns true if the input string contains a numeric character
    checkNumeric(inputStr) {
        let pattern = /[0-9]/;
        return pattern.test(inputStr);
    }
    validateParam(searchTerm) {
        let searchTermLen = searchTerm.length;
        if (searchTermLen === 0 || searchTermLen > 3) {
            this.showToast('Error', 'Search term must have a minimum of 1 and a maximum of 3 characters', 'error');
        }
        else if (searchTermLen === 1) {
            if (this.checkNumeric(searchTerm)) {
                this.showToast('Error', 'First character of the search term cannot be a number', 'error');
            }
            else {
                // Some logic
            }
        }
    }
    // The handler that is executed on click of the search button
    handleSearchClick(event) {
        this.validateParam(this.searchTerm);
        // Do all your logic below
    }
}

From the above, it’s evident that handleSearchClick() calls the validateParam() method, which in turn calls the showToast() method with parameters. Based on its input parameters, showToast() renders the toast message on the UI.

‘This’ – The magic word!

The key to calling reusable code is the use of the ‘this‘ keyword. Functions in JavaScript are essentially objects: like objects, they can be assigned to variables, passed to other functions, and returned from functions. And much like objects, they have their own properties. One of these properties is ‘this‘.

The value that ‘this‘ holds is the current execution context of the JavaScript program. Thus, when used inside a function, the value of ‘this‘ changes depending on how that function is defined, how it is invoked, and the default execution context.
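
As a quick illustration (plain JavaScript, not Salesforce-specific, and with hypothetical names), here is a minimal sketch of how the value of ‘this‘ depends on how a function is invoked:

const component = {
    name: 'MyComponent',
    // Regular method: 'this' is the object the method was called on
    showName() {
        console.log(this.name);
    }
};

component.showName();                // logs 'MyComponent' - invoked on the object

const detached = component.showName;
// detached();                       // 'this' is no longer 'component' when invoked as a plain function

detached.call(component);            // logs 'MyComponent' - context supplied explicitly

This is why the reusable methods above are always invoked as this.showToast(...) and this.checkNumeric(...): calling them on ‘this‘ keeps the component instance as the execution context.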

Open Lightning Component as Tab from Quick Action

Most of us might have opened a Lightning component from a quick action button by embedding the component in the quick action. It’s a nice feature that lets us pop up UI elements from a record page. However, the component appears in a modal. In this blog, let’s see how we can show the component in a new tab instead.

Text Book Lessons

We will be using a quick action on the Case object. The reason I’ve chosen the Case object is that for a few objects (viz., Case, User Profile, and Work Order), quick actions appear in the Chatter tab if feed tracking is enabled. So the first task is to disable feed tracking on the Case object.

The next piece of theory to understand is the lightning:isUrlAddressable interface. It enables you to generate a user-friendly URL for a Lightning component with the pattern /cmp/componentName, instead of the base-64 encoded URL you get with the deprecated force:navigateToComponent event. If you’re currently using the force:navigateToComponent event, you can provide backward compatibility for bookmarked links by redirecting requests to a component that uses lightning:isUrlAddressable.

Finally, we need to understand lightning:navigation. This component lets you navigate to a given pageReference or generate a URL from a pageReference.

The solution

Let’s look at how this works. We create a Lightning component (LC) that uses lightning:navigation to create a URL from a pageReference variable. The pageReference defined in the controller holds the name of the LC that needs to be opened in a new tab, along with any parameters that we need to append to the URL (usually the record Id). We need to use the pageReference type ‘Lightning Component’ (standard__component).

QuickActionComponent.cmp

<!-- Component used on the Quick Action -->
<aura:component implements="force:lightningQuickAction, force:hasRecordId" >
    <lightning:navigation aura:id="navService"/>
    <aura:attribute name="pageReference" type="Object"/>
    <aura:handler name="init" action="{!c.navigateToLC}" value="{!this}" />
    Record Id:::: {!v.recordId}
</aura:component>

QuickActionComponentController.js

({
    navigateToLC : function(component, event, helper) {
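        // Build a page reference pointing to the component that should open in a new tab,
        // passing the current record Id through the state parameters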
        var pageReference = {
            type: 'standard__component',
            attributes: {
                componentName: 'c__TabComponent'
            },
            state: {
                c__refRecordId: component.get("v.recordId")
            }
        };
        component.set("v.pageReference", pageReference);
        const navService = component.find('navService');
        const pageRef = component.get('v.pageReference');
        const handleUrl = (url) => {
            window.open(url);
        };
        const handleError = (error) => {
            console.log(error);
        };
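        // generateUrl returns a Promise that resolves with the URL for the page reference;
        // open it in a new browser tab/window once it resolves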
        navService.generateUrl(pageRef).then(handleUrl, handleError);
    } 
})

TabComponent.cmp

<!-- Component that is opened in a new tab.-->
<aura:component implements="lightning:isUrlAddressable">
    <aura:attribute name="refRecordId" type="String" />
    <aura:handler name="init" value="{!this}" action="{!c.init}" />
    
    <div class="slds-box slds-theme_default">
        <p>This component has been opened from QuickAction button from a record with Id : {!v.refRecordId} as a tab.</p>
    </div>
    
</aura:component>

TabComponentController.js

({
    init : function(component, event, helper) {
        var pageReference = component.get("v.pageReference");
        component.set("v.refRecordId", pageReference.state.c__refRecordId);
    }
})

All set. Let’s click on the quick action. You can see the Lightning component open in a new tab; the URL carries the case record Id as a parameter, and it is displayed on the component.

SFDC ANT Deployments using Azure Pipelines

Introduction

Azure DevOps is a very powerful application that provides git-based repos and pipelines to automate tasks for almost any IT process. In this blog, we dive into the use of Azure Pipelines for Salesforce developers to carry out deployment activities. You can use this blog to understand how to push the metadata in your repo into a Salesforce org. This blog uses an Azure Repo to store the metadata and ANT-based deployments. You could also link a GitHub/Bitbucket repo to the Azure pipeline and run the deployments from there. Instead of ANT-based deployments, sfdx can also be leveraged, but that might need different pipeline tasks to support it. So let us get started retrieving and deploying metadata using ADO pipelines.

Prerequisites

We’ll start with the assumption that you have good experience with the Salesforce ANT Migration Tool, because the Azure pipeline that we’ll build uses this migration tool at the backend. You can start by cloning this repo from my GitHub here. Sign up for an Azure developer edition to try this out from your personal Azure dev org. If your project (at work) allows, you could also create a new repo, or a branch within an existing repo, with the files from GitHub and set up the pipeline to do the deployments.

Process Diagram

From the above diagram, one can understand how the pipeline is configured to run. These are the steps/tasks in the pipeline file. Let us see what each step does:

  1. Checkout from the repo: This task checks out the entire repo into the virtual machine environment. (Azure Pipelines runs on a virtual machine.)
  2. Run ANT scripts: This standard pipeline task is configured to work in the following ways:
    1. Retrieve – to just retrieve from your source org.
    2. Deploy – to deploy the metadata from the repo to the target org.
    3. Both – to do a retrieve and a deployment in a single run of the pipeline.
  3. Push to Repo: This task commits and pushes the files retrieved from the source org to the repo from the local workspace of the Azure virtual machine.

The pipeline I’ve created here is dynamic: it accepts the ANT target from a variable that the user can set just before running the pipeline, so you can choose between retrieve/deploy/both. Now, if you are comfortable with build.xml, you know that with multiple targets that use different sets of username/password, or with multiple pipelines linked to each other, you can automate the complete deployment process of retrieving from one org and deploying to another. In the example in my GitHub, you may notice I am retrieving from and deploying to the same org; I have explained how this could be done between orgs in the video.
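
For reference, here is a minimal build.xml sketch with the ‘retrieve’ and ‘deployCode’ targets that the pipeline below invokes at MyDevWorks/build.xml. It assumes ant-salesforce.jar is available on the Ant classpath; the property names (sf.username, sf.password, sf.serverurl) and the src folder layout are assumptions, so adapt them to the build.properties in your repo.

build.xml (sketch)

<project name="Salesforce Retrieve and Deploy" basedir="." xmlns:sf="antlib:com.salesforce">
    <property file="build.properties"/>

    <!-- Pull the metadata listed in package.xml from the source org -->
    <target name="retrieve">
        <sf:retrieve username="${sf.username}" password="${sf.password}"
                     serverurl="${sf.serverurl}" maxPoll="200"
                     retrieveTarget="src" unpackaged="src/package.xml"/>
    </target>

    <!-- Deploy the contents of the src folder to the target org -->
    <target name="deployCode">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" maxPoll="200"
                   deployRoot="src" rollbackOnError="true"/>
    </target>
</project>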

The Master File

Now let’s look at the pipeline file in detail. I’ve explained the below yml file in detail in the video.

# SFDC Retrieve and Deploy sample

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self

- script: |
    echo "Build Type: $(BUILD_TYPE)"
  displayName: 'Confirming Variables'

- task: Ant@1
  inputs:
    buildFile: 'MyDevWorks/build.xml'
    options: 
    targets: 'retrieve'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
  displayName: 'Retrieving from Source Org'
  condition: or(eq(variables['BUILD_TYPE'], 'retrieve'),eq(variables['BUILD_TYPE'], 'both'))

- task: Ant@1
  inputs:
    buildFile: 'MyDevWorks/build.xml'
    options: 
    targets: 'deployCode'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
  displayName: 'Deploy to Target Org'
  condition: or(eq(variables['BUILD_TYPE'], 'deploy'),eq(variables['BUILD_TYPE'], 'both'))

- script: |
    echo ***All Extract Successful!!!***
    echo ***Starting copying from VM to Repo****
    git config --local user.email "myemail@gmail.com"
    git config --local user.name "Rohit"
    git config --local http.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git add --all
    git commit -m "commit after extract"
    git remote rm origin
    git remote add origin https://<<YOUR_REPO_TOKEN>>@<<YOUR_REPO_URL>>
    git push -u origin HEAD:master
  displayName: 'Push to Repo'

Summing Up

With this setup, if your project employs a DevOps strategy, you can easily create pull requests to your target branch. You don’t need to go through the pain of retrieving the metadata using ANT on your local machine, committing it locally, and pushing to the remote using GitHub Desktop/TortoiseGit/SourceTree.

Always on Cloud. Retrieve and Deploy just by using a browser.

#Giveback #StaySafe

Spring ’20 Feature – Upgrades to Change Sets

Change sets are Salesforce’s native way of transferring customization/configuration from one org to another. Change sets can contain only modifications you can make through the Setup menu and that are supported by the Metadata API. You can’t deploy data (for example, a list of contacts): change sets contain information about the org, not data such as records.

Faster availability of uploaded change sets

In Salesforce, the change set we create in the source sandbox is called an outbound change set, and in the target org that receives it, it is called an inbound change set. Many times there have been issues with uploading and receiving change sets, as it can take considerable time for an uploaded change set to show up in the target org. With Spring ’20, Salesforce has optimized the way change sets are uploaded and received, so uploaded change sets are available for deployment sooner. Salesforce has not provided any metrics for this improvement, so let’s see how much of a difference it makes. Please do comment if you have noticed improvements.

New metadata for change sets

Following are the new components available from the Spring ’20 release that can be included in a change set. All of these are also available for use with ANT-based or sfdx mdapi commands.

Component Name                 | Metadata API Name            | Wildcard Support
Email Service                  | EmailServicesFunction        | No
Lightning Community Template   | CommunityTemplateDefinition  | Yes
Lightning Community Theme      | CommunityThemeDefinition     | Yes
Lightning Message Channel      | LightningMessageChannel      | Yes
Managed Content Type           | ManagedContentType           | Yes
Whitelisted URL for Redirects  | RedirectWhitelistUrl         | Yes
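
For example, here is a minimal package.xml sketch for retrieving one of these new types with the Metadata API (API version 48.0 corresponds to Spring ’20); swap in the type and members you need:

package.xml (sketch)

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Wildcard is supported for Lightning Message Channels -->
        <members>*</members>
        <name>LightningMessageChannel</name>
    </types>
    <version>48.0</version>
</Package>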

Conclusion

As most projects are moving away from change sets with the adoption of ANT and sfdx, let’s wait and see how often we see improvements to the change set way of deployment.

Automate SFDC Data Export Using ADO

Data export has been a hot topic ever since the inception of Salesforce, and there are a lot of tools that help you automate this task. Most of these tools generate the extract either on a local drive or perhaps on cloud servers. How about a data extract that lands directly in your repo? Yes, you heard it right: it’s possible. It has been possible for a long time, but after Azure DevOps (ADO) pipelines became popular in the market, it has become much easier to implement. The same setup that I’ll be explaining could be modified a bit to run from Docker or Jenkins as well. However, let’s focus our discussion on setting up this task in ADO.

Process Flow

ado_process

Setup Dataloader

The Data Loader comes with a command-line component. The command-line Data Loader is the way to run the Data Loader from the command line; it uses a process-conf.xml file that holds the details of the tasks to be performed. Install the latest version of the Data Loader from your Salesforce org along with the Zulu OpenJDK. Salesforce Data Loader uses this JDK, and the PATH variable must be set for it on your machine to run and test locally. For the ADO setup, I’ll explain further down how we install this JDK when the job runs.

Encrypt your password using the encrypt.bat file as outlined in the official documentation. Also, set up the process-conf.xml file in the samples folder. In this example, I’ve used two beans (that’s what they are called in the command-line Data Loader), one for the Account extract and another for the Contact extract.
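
As an illustration, here is a sketch of what the accountExtract bean could look like in process-conf.xml. The endpoint, username, encryption key file, SOQL, and output path are placeholders; depending on your Data Loader version the bean may need scope="prototype" instead of singleton="false", and the output path must resolve to the extractFiles folder that the pipeline commits.

process-conf.xml (sketch)

<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <bean id="accountExtract" class="com.salesforce.dataloader.process.ProcessRunner" singleton="false">
        <description>Extracts Account records to a CSV file.</description>
        <property name="name" value="accountExtract"/>
        <property name="configOverrideMap">
            <map>
                <entry key="sfdc.endpoint" value="https://login.salesforce.com"/>
                <entry key="sfdc.username" value="your.username@example.com"/>
                <!-- Output of encrypt.bat, plus the key file used to generate it -->
                <entry key="sfdc.password" value="ENCRYPTED_PASSWORD"/>
                <entry key="process.encryptionKeyFile" value="samples/conf/key.txt"/>
                <entry key="sfdc.entity" value="Account"/>
                <entry key="process.operation" value="extract"/>
                <entry key="sfdc.extractionSOQL" value="SELECT Id, Name FROM Account"/>
                <entry key="dataAccess.type" value="csvWrite"/>
                <entry key="dataAccess.name" value="extractFiles/accountExtract.csv"/>
            </map>
        </property>
    </bean>
</beans>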

Create YML Script

Now it’s time to create the YML file. This is the file the ADO job picks up to perform the actions we’ve defined in it. Create an empty yml file, add the below code, and save it.

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger: none
pool:
  vmImage: 'windows-latest'
  
steps:
- task: JavaToolInstaller@0
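  # Installs the Zulu OpenJDK zip committed to the repo onto the hosted agent,
  # since the command-line Data Loader needs a JDK available on the agent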
  inputs:
    versionSpec: '11'
    jdkArchitectureOption: 'x86'
    jdkSourceOption: 'LocalDirectory'
    jdkFile: 'build/setups/zulu13.29.9-ca-jdk13.0.2-win_x64.zip'
    jdkDestinationDirectory: '/builds/binaries/externals'
    cleanDestinationDirectory: true
- script: |
    mkdir extractFiles
    cd build/dataLoaderApp/bin
    echo ******Starting Customer Extract.....*******
    echo -----------------------------------
    echo Extracting Account...
    echo -----------------------------------
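    REM D:/a/1/s is the source checkout path (Build.SourcesDirectory) on the Microsoft-hosted Windows agent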
    call process.bat "D:/a/1/s/build/dataLoaderApp/samples/conf" "accountExtract"
    echo --------------------------------------------------------
    echo Account extraction completed successfully!
    echo --------------------------------------------------------    
  displayName: 'Account Extract'
- script: |
    cd build/dataLoaderApp/bin
    
    echo ------------------------
    echo Extracting Contact...
    echo ------------------------
    call process.bat "D:/a/1/s/build/dataLoaderApp/samples/conf" "contactExtract"
    echo ----------------------------------------------
    echo Contact  extraction completed successfully!
    echo ----------------------------------------------  
  displayName: 'Contact Extract'
- script: |
    echo ***All Extract Successful!!!***
    echo ***Starting copying from VM to Repo****
    git config --local user.email "youremail@email.com"
    git config --local user.name "Rohit"
    git config --local http.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git add extractFiles/*.csv
    git commit -m "commit after extract"
    git remote rm origin
    git remote add origin <Repo URL>
    REM Replace the username with the password (token) in the URL, in the format https://<password>@dev.azure.com/..../../.../
    git push -u origin HEAD:master
  displayName: 'Push to Repo'

Setup ADO Pipeline

Now it’s time to move on to git and set up the pipeline. Keeping to the scope of this blog, I’m not going into the details of ADO and pipelines; let’s focus on the Data Loader automation part. ADO can work with any git repo, and in this tutorial we’ll use an Azure repo itself.

There is a free version of Azure DevOps that you can sign up for, and in this tutorial I’ll use my personal Azure instance.

Get yours by visiting here. Choose Sign up and create an account. After that, log in to Azure and follow the steps below (a command-line sketch follows the list):

  1. Create a new repo.
  2. Initialize the repo with a readme file.
  3. Clone the repo to your local machine.
  4. Merge in the below files/folders:
    1. YML file
    2. Dataloader folder
    3. Zulu OpenJDK zip
  5. Commit the changes.
  6. Push to remote.
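
A minimal sketch of those git steps from a local terminal (shown for a bash-style shell); the organization, project, and repo names are placeholders, so substitute your own:

git clone https://dev.azure.com/<your-org>/<your-project>/_git/<your-repo>
cd <your-repo>
# copy in the YML file, the Dataloader folder and the Zulu OpenJDK zip, then:
git add .
git commit -m "Add data loader automation files"
git push origin master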

Now you have the required files in your branch/repo, and it’s time to create a pipeline job. Open your project and click on Pipelines.

ado_pipeline

Follow the below steps:

  • Choose New Pipeline.
  • Choose ‘Azure Repos Git’
ado_git
  • Select your repo.
ado_repo
  • Choose ‘Existing Azure Pipelines YAML file’.
ado_pipeline
  • Enter YML file path
ado_loc
  • Choose Continue at the bottom
  • At this point you can preview the YML file.
  • Choose Save.
  • Click on Run Pipeline to run the job.

You can see the job status by choosing the job. Once the job has run successfully, you can see the extracted files in the extractFiles folder in the repo.

ado_files

Conclusion

You saw how the files get extracted and committed to the repo. An ADO job assigns an agent that you specify in the yml and runs the scripts/tasks in that VM environment. In this example we used a Windows VM image, because the command-line Data Loader works only in a Windows environment. This job was run manually; to schedule it, for example to run on the first of every month, you need to add a schedule trigger with a CRON expression. I will have this covered in the upcoming video.

schedules:
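# cron fields: minute hour day-of-month month day-of-week (evaluated in UTC)
# "0 10 1 * *" = 10:00 on the first day of every month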
- cron: "0 10 1 * *"
  displayName: First of Month 10AM Build
  branches:
    include:
    - master

Cheers.

#GiveBack
