
  • Clean code tips – names and functions | Code4IT


    I don’t have to tell you why you need to write clean code. Here you’ll see some tips about how to name things and how to structure functions.

    Table of Contents

    Just a second! 🫷
    If you are here, it means that you are a software developer.
    So, you know that storage, networking, and domain management have a cost.

    If you want to support this blog, please ensure that you have disabled the adblocker for this site.
    I configured Google AdSense to show as few ads as possible – I don’t want to bother you with lots of ads, but I still need to add some to pay for the resources for my site.

    Thank you for your understanding.
    Davide

    A few days ago I started (re)reading Clean Code by Robert Martin. It’s a fundamental book for programmers, so it’s worth reading every once in a while.

    But this time I decided to share on Twitter some of the tips that I find interesting.

    If you are on Twitter, you can follow the retweets to this tweet, and join me in this reading.

    In this series of articles, I’ll sum up what I’ve learned reading chapter 2 – Meaningful Names, and 3 – Functions.

    Here’s the list (in progress)

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    1: Use consistent names

    A good way to keep your code clean is to use consistent names throughout the whole codebase. Just imagine what happens if you use different names for the same concept.

    Imagine that you need to retrieve data from many sources. You could define these methods

    getArticles()
    fetchUsers()
    retrievePages()
    

    and these other ones

    getArticles()
    getUsers()
    getPages()
    

    See the difference?

    At the same time, there’s a big difference between

    getArticleById(string id)
    fetchSingleUser(string id)
    useIdToRetrievePage(string id)
    

    and

    getArticleById(string id)
    getUserById(string id)
    getPageById(string id)
    

    don’t you think?

    2: Keep simple, small functions with meaningful names

    Remember that your code must be clean enough to be easily readable without spending too much time trying to guess what a function does. Smaller functions are easier to read and to understand.

    Take this function:

    string printOnlyOddNumbers(int[] numbers)
    {
    	List<int> n = new List<int>() { };
    	List<int> rev = new List<int>() { };
    	foreach (var number in numbers)
    	{
    		if (number % 2 == 1)
    		{
    			n.Add(number);
    		}
    	}
    
    	for (int i = n.Count - 1; i >= 0; i--)
    	{
    		rev.Add(n.ElementAt(i));
    	}
    
    	StringBuilder sb = new StringBuilder();
    	for (int i = 0; i < rev.Count; i++)
    	{
    		sb.Append(rev.ElementAt(i));
    	}
    	return sb.ToString();
    }
    

    What can you see?

    1. the variables have meaningless names (what do n, rev and sb mean?)
    2. it does multiple things: it filters the numbers, reverses the list and saves them into a string
    3. the function name lies: it does not print the numbers, it stores them into a string.
    4. it has many levels of indentation: an IF within a FOR within a function.

    Isn’t it better if we could split it into multiple, simpler functions with better names?

    string storeOddNumbersInAReversedString(int[] numbers)
    {
    	List<int> oddNumbers = getOnlyOddNumbers(numbers);
    	List<int> reversedNumbers = reverseNumbers(oddNumbers);
    
    	return storeNumbersInString(reversedNumbers);
    }
    
    List<int> getOnlyOddNumbers(int[] numbers)
    {
    	return numbers.Where(n => n % 2 == 1).ToList();
    }
    
    List<int> reverseNumbers(List<int> numbers)
    {
    	 numbers.Reverse();
    	 return numbers;
    }
    
    string storeNumbersInString(List<int> numbers)
    {
    	StringBuilder sb = new StringBuilder();
    	for (int i = 0; i < numbers.Count; i++)
    	{
    		sb.Append(numbers.ElementAt(i));
    	}
    	return sb.ToString();
    }
    

    Still not perfect, but it’s better than the original function.
    Have a look at the reverseNumbers function. It will cause trouble, and we’ll see why soon.

    Also, notice how I changed the name of the main function: storeOddNumbersInAReversedString is longer than printOnlyOddNumbers, but it helps to understand what’s going on.

    3: Keep a coherent abstraction level

    Don’t mix different abstraction levels in the same function:

    string storeOddNumbersInAReversedString_WithStringBuilder(int[] numbers)
    {
    	List<int> oddNumbers = getOnlyOddNumbers(numbers);
    	List<int> reversedNumbers = reverseNumbers(oddNumbers);
    
    	StringBuilder sb = new StringBuilder();
    	for (int i = 0; i < reversedNumbers.Count; i++)
    	{
    		sb.Append(reversedNumbers.ElementAt(i));
    	}
    	return sb.ToString();
    }
    

    Here in the same function I have two high-level functions (getOnlyOddNumbers and reverseNumbers) and some low-level concepts (the for loop and the .Append usage on a StringBuilder).

    It can confuse readers because they won’t know which parts are important details and which are abstract operations. You’ve already seen how to solve this issue.

    4: Prefer polymorphism over switch statements

    You have this Ticket class:

    public enum TicketType
    {
    	Normal,
    	Premium,
    	Family,
    	Free
    }
    
    public class Ticket
    {
    	public DateTime ExpirationDate { get; set; }
    	public TicketType TicketType { get; set; }
    }
    

    And a Cart class that, given a Ticket and a quantity, calculates the price for the purchase:

    public class Cart
    {
    	public int CalculatePrice(Ticket ticket, int quantity)
    	{
    		int singlePrice;
    		switch (ticket.TicketType)
    		{
    			case TicketType.Premium: singlePrice = 7; break;
    			case TicketType.Family: singlePrice = 4; break;
    			case TicketType.Free: singlePrice = 0; break;
    			default: singlePrice = 5; break;
    		}
    		return singlePrice * quantity;
    	}
    }
    

    Needless to say, this snippet sucks! It has static values for the single prices of the tickets based on the ticket type.

    The ideal solution is to remove (almost) all the switch statements using polymorphism: every subclass manages its own information and the client doesn’t have to repeat the same switch over and over.

    First of all, create a subclass for every type of ticket:

    public abstract class Ticket
    {
    	public DateTime ExpirationDate { get; set; }
    	public abstract int SinglePrice { get; }
    }
    
    public class NormalTicket : Ticket
    {
    	public override int SinglePrice { get => 5; }
    }
    
    public class PremiumTicket : Ticket
    {
    	public override int SinglePrice { get => 7; }
    }
    
    public class FamilyTicket : Ticket
    {
    	public override int SinglePrice { get => 4; }
    }
    
    public class FreeTicket : Ticket
    {
    	public override int SinglePrice { get => 0; }
    }
    

    and simplify the CalculatePrice function:

    public int CalculatePrice(Ticket ticket, int quantity)
    {
        return ticket.SinglePrice * quantity;
    }
    

    No more useless code! And now if you need to add a new type of ticket you don’t have to care about adding the case branch in every switch statement, but you only need a new subclass.
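    To make the benefit concrete, here’s a minimal, self-contained sketch of the polymorphic pricing shown above (only two ticket types are included for brevity; the quantities are made up):

```csharp
using System;

// Each subclass carries its own price, so the cart never switches on a type.
public abstract class Ticket { public abstract int SinglePrice { get; } }
public class PremiumTicket : Ticket { public override int SinglePrice => 7; }
public class FreeTicket : Ticket { public override int SinglePrice => 0; }

public class Cart
{
    // No switch statement: the ticket already knows its own price.
    public int CalculatePrice(Ticket ticket, int quantity)
        => ticket.SinglePrice * quantity;
}
```

    Adding a new ticket type means adding one subclass; Cart stays untouched.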

    I said that you should almost remove every switch statement. That’s because you need to create those objects somewhere, right?

    public class TicketFactory
    {
    	public Ticket CreateTicket(TicketType type, DateTime expirationDate)
    	{
    		Ticket ticket = null;
    		switch (type)
    		{
    			case TicketType.Family:
    				ticket = new FamilyTicket() { ExpirationDate = expirationDate };
    				break;
    			case TicketType.Free:
    				ticket = new FreeTicket() { ExpirationDate = expirationDate };
    				break;
    			case TicketType.Premium:
    				ticket = new PremiumTicket() { ExpirationDate = expirationDate };
    				break;
    			default:
    				ticket = new NormalTicket() { ExpirationDate = expirationDate };
    				break;
    
    		}
    		return ticket;
    	}
    }
    

    THIS is where you must use the TicketType enum and add new subclasses of the Ticket class!

    PSST: wanna know some cool things about Enums? Here’s something for you!

    5: Avoid side effects

    You often hear (correctly) that

    A function must do one thing and do it well

    This means that you also need to take care of side effects: avoid changing the state of the system or of the input parameters.

    Do you remember the reverseNumbers function from the example above?

    List<int> reverseNumbers(List<int> numbers)
    {
    	 numbers.Reverse();
    	 return numbers;
    }
    

    It does a terrible, terrible thing: it reverses the input parameter!

    List<int> numbers = new List<int> {1,2,3};
    
    var reversedNumbers = reverseNumbers(numbers);
    // numbers = 3,2,1
    // reversedNumbers = 3,2,1
    

    So now the state of the input parameter has changed without notifying anyone. Just avoid it!
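    A side-effect-free version would copy the input first and reverse only the copy (a sketch; the method and class names are mine):

```csharp
using System.Collections.Generic;

static class NumberUtils
{
    // Returns a new reversed list; the caller's list is left untouched.
    public static List<int> ReverseNumbers(List<int> numbers)
    {
        var reversed = new List<int>(numbers); // copy first
        reversed.Reverse();                    // mutate only the copy
        return reversed;
    }
}
```

    Now the caller gets the reversed sequence without its own list being silently modified.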

    6: Fewer arguments, better readability

    Keep the number of function arguments as small as possible. Ideally you should have 0 or 1 arguments, but even 2 or 3 are fine. If more… well, you have to think about how to refactor your code!

    What are the best cases for using one argument?

    • Check a property on that input (eg: isOdd(int number))
    • Transform the input variable (eg: ToString(int number))

    Sometimes you just cannot use a single parameter, for example for coordinates. But you can group this information in an object and work on it. It’s not cheating if there’s logic behind this grouping!
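    For instance, four coordinate arguments can collapse into two meaningful ones (a hypothetical example; Point and Distance are my names, not from the article):

```csharp
using System;

// Hypothetical argument object: groups related values with a clear meaning.
public class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
}

public static class Geometry
{
    // Distance(Point, Point) reads better than Distance(int, int, int, int).
    public static double Distance(Point a, Point b)
        => Math.Sqrt(Math.Pow(a.X - b.X, 2) + Math.Pow(a.Y - b.Y, 2));
}
```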

    7: Prefer exceptions over error codes

    Remember what I said about polymorphism and enums? The same applies to exceptions.

    enum StatusCode {
    	OK,
    	NotFound,
    	NotAuthorized,
    	GenericError
    }
    
    StatusCode DoSomething(int variable){
    	// do something
    	return StatusCode.GenericError;
    }
    

    So every caller must explicitly check the returned status, adding more boilerplate code.

    Also, consider that if the DoSomething method also returns a value, you must return a tuple or a complex object to represent both the status and the value.

    class Result
    {
    	public int? Value { get; set; }
    	public StatusCode StatusCode { get; set; }
    }
    
    Result GetHalf(int number)
    {
    	if (number % 2 == 0)
    	{
    		return new Result
    		{
    			Value = number / 2,
    			StatusCode = StatusCode.OK
    		};
    	}
    	else
    	{
    		return new Result
    		{
    			Value = null,
    			StatusCode = StatusCode.GenericError
    		};
    	}
    }
    
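    An exception-based GetHalf (a sketch) returns the value directly and throws when the input is invalid, so callers don’t need to inspect a status object on every call:

```csharp
using System;

static class MathOps
{
    // Throws instead of returning a status code: the happy path stays clean,
    // and the failure carries a descriptive message.
    public static int GetHalf(int number)
    {
        if (number % 2 != 0)
            throw new ArgumentException($"{nameof(number)} must be even", nameof(number));
        return number / 2;
    }
}
```

    Callers then wrap the call in a try/catch only where they can actually handle the failure, instead of threading status codes through every layer.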

    Conclusion

    This is a recap of chapters 2 and 3 of Clean Code. We’ve seen how to write readable code, with small functions that are easy to test and, even better, easy to understand.

    As you’ve seen, I haven’t shown perfect code, but I focused on small improvements: reaching clean code is a long path, and you must approach it one step at a time.

    As soon as I read the other chapters I’ll post some new tips.

    Happy coding!






  • Clean code tips – comments and formatting | Code4IT


    Are all comments bad? When are they necessary? Why is formatting so important? Writing clean code is not only about the executed code, but also about everything around it.


    This is the second part of my series of tips about clean code. We’ll talk about comments, why many of them are useless or even dangerous, why some are necessary and how to improve your comments. We’ll also have a look at why formatting is so important, and we can’t afford to write messy code.

    Here’s the list (in progress)

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Often you see comments that explain what a method or a class does.

    /// <summary>
    /// Returns the max number of an array
    /// </summary>
    /// <param name="numbers">array of numbers</param>
    /// <returns>Max number in the array</returns>
    public int GetMaxNumber(int[] numbers)
    {
        // return max;
        return numbers.Max();
    }
    

    What’s the point of this comment? Nothing: it doesn’t add any info about the method’s meaning. Even worse, it clutters the codebase and makes the method harder to read.

    Luckily, sometimes comments are helpful, or even necessary; rare cases, but they exist. Let’s see when.

    Show intention and meaning

    Sometimes the external library you’re using is not well documented, or you are writing an algorithm that needs some explanations. Put a comment to explain what you are doing and why.

    Another example is when you are using regular expressions: the meaning can be really hard to grasp, so using a comment to explain what you are doing is the best thing to do:

    public bool CheckIfStringIsValid(string password)
    {
        // 2 to 7 lowercase chars followed by 3 or 4 numbers
        // Valid:   kejix173
        //          aoe193
        // Invalid: a92881
        Regex regex = new Regex(@"[a-z]{2,7}[1-9]{3,4}");
        var hasMatch = regex.IsMatch(password);
        return hasMatch;
    }
    

    Reason for default values

    Some methods have default values, and you’d better explain why you chose that value:

    public string FindPerfectAnimal(List<Criterion> criteria)
    {
        string perfectAnimal = ElaborateCriteria(criteria);
        // We use "no-preferences" so we can easily perform queries on the DB and show it on the UI
        return string.IsNullOrEmpty(perfectAnimal) ? "no-preferences" : perfectAnimal;
    }
    

    What if you didn’t add that comment? Maybe someone will be tempted to change that value, thus breaking both the UI and the DB.

    TODO marks

    Yes, you can add TODO comments in your code. Just don’t use them as an excuse to avoid fixing bugs, refactoring your code, or renaming functions and variables with better names.

    public void Register(string username, string password)
    {
        // TODO: add validation on password strength
        dbRepository.RegisterUser(username, password);
    }
    

    Some IDEs have a TODO window that recognizes those comments: so, yes, it’s a common practice!

    Highlight the importance of some code

    Some method calls seem redundant but they actually make the difference. A good practice is to highlight those parts and explain why they are so important.

    public string GetImagePath(string resourceId)
    {
        var item = dbRepository.GetItem(resourceId);
    
        // The source returns image paths with trailing whitespaces. We must remove them.
        return item.ImagePath.Trim();
    }
    

    Most of the time, though, comments should be avoided. They can confuse the developer, fall out of sync with the latest version of the code, or simply make the code harder to read. Let’s see some of the bad uses of comments.

    They explain what the code does

    If your code is hard to read, why spend time writing comments to explain what it does instead of writing better code, with better names and easier-to-read functions?

    They don’t add anything important not already written in the code

    // sum two numbers and return the result
    public int Add(int a, int b)
    {
    	// calculate the sum and return it to the caller
    	return a + b;
    }
    

    What’s the meaning of these comments? Absolutely nothing. They just add lines of code to be read.

    They lie

    It may happen that you write your comments with the best intentions, but you don’t choose the best words for your comments, and they may involuntarily lie.

    Have a look at this snippet.

    // counts how many odd numbers are in the list
    public int CountOddsNumbers(IEnumerable<int> values)
    {
    	return values.Where(v => v % 2 == 1).Count();
    }
    

    Where are the lies? First of all, the numbers are not in the list, but in an IEnumerable. Yes, a List is an IEnumerable, but here that word can be misinterpreted. Second, what happens if the input value is null? Does this method return null, zero, or does it throw an exception? You have to check the internal code to see what’s really going on.

    They are not updated

    Maybe you’ve written the perfect comment that explains what your API does.

    But suddenly, someone adds a cache layer in your code, and he or she doesn’t update the documentation.

    So you’ll end up with wrong comments that are simply outdated.

    They indicate the end of a block

    What do you think of this snippet?

    public int CountPalindromes(IEnumerable<string> values)
    {
    	int count = 0;
    	foreach (var element in values)
    	{
    		if (!string.IsNullOrWhiteSpace(element))
    		{
    			var sb = new StringBuilder();
    			var reversedChars = element.Reverse();
    			foreach (var ch in reversedChars)
    			{
    				sb.Append(ch);
    			}
    
    			if (element.Equals(sb.ToString(), StringComparison.CurrentCultureIgnoreCase))
    				count++;
    
    		} // end if
    	} // end foreach
    	return count;
    } // end CountPalindromes
    

    If the code is complex enough to require end CountPalindromes, end foreach and end if, isn’t it better to refactor the code and use shorter methods?

    public int CountPalindromes(IEnumerable<string> values)
    {
    	return values
    	.Where(v => !string.IsNullOrWhiteSpace(v))
    	.Where(v => v.Equals(ReverseString(v), StringComparison.CurrentCultureIgnoreCase)).Count();
    }
    
    public string ReverseString(string originalString)
    {
    	return new string(originalString.Reverse().ToArray());
    }
    

    Better, isn’t it?

    Both bad and good

    Some comments can be either good or bad: it depends on how you structure them.

    Take for example documentation for APIs.

    /// <summary>
    /// Returns a page of items
    /// </summary>
    /// <param name="pageNumber">Page Number</param>
    /// <param name="pageSize">Page size</param>
    /// <returns>A list of items</returns>
    [HttpGet]
    public async Task<List<Item>> GetPage(int pageNumber, int pageSize)
    {
    	// do something
    }
    

    Useless comment, isn’t it? It doesn’t add anything that you could have guessed by looking at the parameters and the function name.

    Can we use every value for pageNumber and for pageSize? What happens if there are no items to be returned? Does it return a particular status code or does it return an empty list?

    /// <summary>
    /// Returns a page of items
    /// </summary>
    /// <param name="pageNumber">Number of the page to be fetched. This index is 0-based. It must be greater or equal than zero.</param>
    /// <param name="pageSize">Maximum number of items to be retrieved. It must be greater or equal than zero.</param>
    /// <returns>A list of up to <paramref name="pageSize"/> items. Empty result if no more items are available</returns>
    [HttpGet]
    [Route("getpage")]
    [ProducesResponseType(200)]
    [ProducesResponseType(400)]
    [ProducesResponseType(500)]
    public async Task<List<Item>> GetPage(int pageNumber, int pageSize)
    {
    	if (pageNumber < 0)
    		throw new ArgumentException($"{nameof(pageNumber)} cannot be less than zero");
    	if (pageSize < 0)
    		throw new ArgumentException($"{nameof(pageSize)} cannot be less than zero");
    
    	// do something
    }
    

    Now all these questions are addressed. Still not perfect, though. But you get the idea.

    Why spend time on code formatting?

    Why bother writing well-formatted code? Do I really need to spend time on formatting? Who cares! All the code gets transformed into bits anyway, so why care about tabs, spacing, line length and so on?

    Right?

    No.

    Here’s a great quote from that book:

    The functionality you create today has a good chance of changing in the next release, but the readability of your code will have a profound effect on all the changes that will ever be made.

    How to structure classes

    Think of a class as if it were a news article. Would you prefer all the info mixed up, or clear, structured content?
    So a good idea is to put all the general info at the top of the file, and order the functions so that the more you scroll down, the more you get into the details of what’s going on.

    This helps readers understand what the class does in a general way just by looking at the top of it. If they are interested, they can scroll down and read the details.

    So a good way to structure your code can be

    1. public properties
    2. constructor
    3. public functions
    4. private functions

    Some programmers prefer other structures, like

    1. public functions
    2. private functions
    3. constructor
    4. public properties

    For me the second option is odd, but it’s not wrong. Whichever you prefer, remember to be consistent across your codebase.
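    Following the first ordering, a class skeleton might look like this (the class and member names are placeholders of mine):

```csharp
// Skeleton following the first ordering:
// properties, constructor, public functions, private functions.
public class ReportGenerator
{
    // 1. public properties: general info first
    public string Title { get; set; }

    // 2. constructor
    public ReportGenerator(string title) => Title = title;

    // 3. public functions: the high-level story
    public string Generate() => FormatHeader() + "body";

    // 4. private functions: the details, further down the file
    private string FormatHeader() => $"# {Title}\n";
}
```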

    Conclusion

    We’ve seen some aspects that are often considered secondary: comments and formatting. They are part of the codebase, and you should take care of them.

    In general, when you’re writing code and comments, stop for a second and think “is this part readable? Is it meaningful? Can I improve it?”.

    Don’t forget that you’re doing it not only for others but even for your future self.

    So, for now…

    Happy coding!






  • Clean code tips – Abstraction and objects | Code4IT


    Are Getters and Setters the correct way to think of abstraction? What are the pros and cons of OOP and procedural programming? And, in the OOP world, how can you define objects?


    This is the third part of my series of tips about clean code.

    Here’s the list (in progress)

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    In this article, I’m going to explain how to define classes in order to make your code extensible, more readable and easier to understand. In particular, I’m going to explain how to use Abstraction effectively, what’s the difference between pure OOP and procedural programming, and how the Law of Demeter can help you structure your code.

    The real meaning of abstraction

    Some people think that abstraction is nothing but adding Getters and Setters to class properties, in order to (if necessary) manipulate the data before setting or retrieving it:

    interface IMixer_A
    {
    	void SetVolume(int value);
    	int GetVolume();
    	int GetMaxVolume();
    }
    
    class Mixer_A : IMixer_A
    {
    	private const int MAX_VOLUME = 100;
    	private int _volume = 0;
    
    	public void SetVolume(int value) { _volume = value; }
    	public int GetVolume() { return _volume; }
    	public int GetMaxVolume() { return MAX_VOLUME; }
    }
    

    This way of structuring the class does not hide the implementation details, because any client that interacts with the Mixer knows that internally it works with integer values. A client should only know about the operations that can be performed on a Mixer.

    Let’s see a better definition for an IMixer interface:

    interface IMixer_B
    {
    	void IncreaseVolume();
    	void DecreaseVolume();
    	void Mute();
    	void SetToMaxVolume();
    }
    
    class Mixer_B : IMixer_B
    {
    	private const int MAX_VOLUME = 100;
    	private int _volume = 0;
    
    	public void IncreaseVolume()
    	{
    		if (_volume < MAX_VOLUME) _volume++;
    	}
    
    	public void DecreaseVolume()
    	{
    		if (_volume > 0) _volume--;
    	}
    
    	public void Mute() { _volume = 0; }
    
    	public void SetToMaxVolume()
    	{
    		_volume = MAX_VOLUME;
    	}
    }
    

    With this version, we can perform all the available operations without knowing the internal details of the Mixer. Some advantages?

    • We can change the internal type for the _volume field, and store it as a ushort or a float, and change the other methods accordingly. And no one else will know it!
    • We can add more methods, for instance a SetVolumeToPercentage(float percentage) without the risk of affecting the exposed methods
    • We can perform additional checks and validation before performing the internal operations

    It helps to think of classes as real objects you can interact with: if you have a stereo, you won’t manually set the volume inside its circuit; you’ll press a button that increases the volume and performs all the operations for you. At the same time, the volume value you see on the display is a “human” representation of the internal state, not the real value.

    Procedural vs OOP

    Object-oriented programming works best when you expose behaviors, so that no client has to access internal properties.

    Have a look at this statement from Wikipedia:

    The focus of procedural programming is to break down a programming task into a collection of variables, data structures, and subroutines, whereas in object-oriented programming it is to break down a programming task into objects that expose behavior (methods) and data (members or attributes) using interfaces. The most important distinction is that while procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an “object”, which is an instance of a class, operates on its “own” data structure.

    To see the difference between OO and Procedural programming, let’s write the same functionality in two different ways. In this simple program, I’m going to generate the <a> tag for content coming from different sources: Twitter and YouTube.

    Procedural programming

    public class IContent
    {
    	public string Url { get; set; }
    }
    
    class Tweet : IContent
    {
    	public string Author { get; set; }
    }
    
    class YouTubeVideo : IContent
    {
    	public string ChannelName { get; set; }
    }
    

    Nice and easy: the classes don’t expose any behavior, but only their properties. So, a client class (I’ll call it LinkCreator) will use their properties to generate the HTML tag.

    public static class LinkCreator
    {
    	public static string CreateAnchorTag(IContent content)
    	{
    		switch (content)
    		{
    			case Tweet tweet: return $"<a href=\"{tweet.Url}\"> A post by {tweet.Author}</a>";
    			case YouTubeVideo yt: return $"<a href=\"{yt.Url}\"> A video by {yt.ChannelName}</a>";
    			default: return "";
    		}
    	}
    }
    

    We can notice that the Tweet and YouTubeVideo classes are really minimal, so they’re easy to read.
    But there are some downsides:

    • By only looking at the IContent classes, we don’t know what kind of operations the client can perform on them.
    • If we add a new class that inherits from IContent we must implement the operations that are already in place in every client. If we forget about it, the CreateAnchorTag method will return an empty string.
    • If we change the type of URL (it becomes a relative URL or an object of type System.Uri) we must update all the methods that reference that field to propagate the change.

    Object-oriented programming

    In Object-oriented programming, we declare the functionalities to expose and we implement them directly within the class:

    public interface IContent
    {
    	string CreateAnchorTag();
    }
    
    public class Tweet : IContent
    {
    	public string Url { get; }
    	public string Author { get; }
    
    	public string CreateAnchorTag()
    	{
    		return $"<a href=\"{Url}\"> A post by {Author}</a>";
    	}
    }
    
    public class YouTubeVideo : IContent
    {
    	public string Url { get; }
    	public string ChannelName { get; }
    
    	public string CreateAnchorTag()
    	{
    		return $"<a href=\"{Url}\"> A video by {ChannelName}</a>";
    	}
    }
    

    We can see that the classes are more voluminous, but just by looking at a single class, we can see what functionalities they expose and how.

    So the LinkCreator class is simplified, since it doesn’t have to worry about the implementations:

    public static class LinkCreator
    {
    	public static string CreateAnchorTag(IContent content)
    	{
    		return content.CreateAnchorTag();
    	}
    }
    

    But even here there are some downsides:

    • If we add a new IContent type, we must implement every method explicitly (or, at least, leave a dummy implementation)
    • If we expose a new method on IContent, we must implement it in every subclass, even when it’s not required (should I care about the total video duration for a Twitter channel? Of course not).
    • It’s harder to create easy-to-maintain classes hierarchies

    So what?

    Luckily, we don’t live in a world of black and white; there are other shades: it’s highly unlikely that you’ll use pure OO programming or pure procedural programming.

    So don’t stick too closely to the theory: use whatever fits your project and yourself best.

    Understand the pros and cons of each approach, and apply them wherever needed.

    Objects vs Data structures – according to Uncle Bob

    There’s a statement by the author that is the starting point of all his following considerations:

    Objects hide their data behind abstractions and expose functions that operate on that data. Data structures expose their data and have no meaningful functions.

    Personally, I disagree with him. For me it’s the opposite: think of a linked list.

    A linked list is a data structure consisting of a collection of nodes linked together to form a sequence. You can perform some operations, such as insertBefore, insertAfter, removeBefore and so on. But it exposes only the operations, not the internals: you won’t know whether internally it is built with an array, a list, or some other structure.

    interface ILinkedList
    {
    	Node[] GetList();
    	void InsertBefore(Node node);
    	void InsertAfter(Node node);
    	void DeleteBefore(Node node);
    	void DeleteAfter(Node node);
    }
    

    On the contrary, a simple class used just as a DTO or as a View Model creates objects, not data structures.

    class Person
    {
    	public String FirstName { get; set; }
    	public String LastName { get; set; }
    	public DateTime BirthDate { get; set; }
    }
    

    Regardless of the names, it’s important to know when one type is preferred over the other. Ideally, you should not allow the same class to expose both properties and methods, like this one:

    class Person
    {
    	public String FirstName { get; set; }
    	public String LastName { get; set; }
    	public DateTime BirthDate { get; set; }
    
    	public string CalculateSlug()
    	{
    		return FirstName.ToLower() + "-" + LastName.ToLower() + "-" + BirthDate.ToString("yyyyMMdd");
    	}
    }
    

    An idea to avoid this kind of hybrid is to have a different class which manipulates the Person class:

    static class PersonAttributesManager
    {
    	public static string CalculateSlug(Person p)
    	{
    		return p.FirstName.ToLower() + "-" + p.LastName.ToLower() + "-" + p.BirthDate.ToString("yyyyMMdd");
    	}
    }
    

    In this way, we decouple the data of a pure Person from the derived attributes that a specific client may need from that class.
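
    In C#, a natural way to express such a manipulator is an extension method, which keeps Person a pure data holder while letting clients call the operation fluently. This is just a sketch of the idea, not code from the book:

    static class PersonExtensions
    {
    	// same logic as PersonAttributesManager, but callable as person.CalculateSlug()
    	public static string CalculateSlug(this Person p)
    	{
    		return p.FirstName.ToLower() + "-" + p.LastName.ToLower() + "-" + p.BirthDate.ToString("yyyyMMdd");
    	}
    }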

    The Law of Demeter

    The Law of Demeter is a programming law that says that a module should only talk to its friends, not to strangers. What does it mean?

    Say that you have a MyClass class that contains a MyFunction method, which can accept some arguments. The Law of Demeter says that MyFunction should only call the methods of

    1. MyClass itself
    2. an object created within MyFunction
    3. any object passed as a parameter to MyFunction
    4. any object stored in the current instance of MyClass

    This is strictly related to the fact that things (objects or data structures – it depends if you agree with the Author’s definitions or not) should not expose their internals, but only the operations on them.

    Here’s an example of what not to do:

    class LinkedListClient{
    	ILinkedList linkedList;
    
    	public void AddTopic(Node nd){
    		// do something
    		linkedList.NodesList.Next = nd;
    		// do something else
    	}
    }
    

    What happens if the implementation changes or you find a bug in it? You have to update every caller. Also, you are coupling the two classes too tightly.

    A problem with this rule is that it also forbids the most common operations on base types:

    class LinkedListClient{
    	ILinkedList linkedList;
    
    	public int GetCount(){
    		return linkedList.GetList().Count();
    	}
    }
    

    Here, the GetCount method violates the Law of Demeter, because it performs operations on the array returned by GetList. To solve this problem, you have to add a GetCount() method to the ILinkedList interface and call that method from the client.
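
    A sketch of the fixed version, assuming we are free to extend the interface: the list answers the question itself, and the client stops digging into the returned collection.

    interface ILinkedList
    {
    	// ...existing operations...
    	int GetCount();
    }

    class LinkedListClient
    {
    	ILinkedList linkedList;

    	public int GetCount()
    	{
    		// a single call on a direct collaborator: Demeter-friendly
    		return linkedList.GetCount();
    	}
    }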

    When it’s a single method, it’s acceptable. What about operations on strings or dates?

    Take the Person class. If we exposed the BirthDate property as a method (something like GetBirthDate), we could do something like

    class PersonExample{
    	void DoSomething(Person person){
    		var a = person.GetBirthDate().ToString("yyyy-MM-dd");
    		var b = person.GetBirthDate().AddDays(52);
    	}
    }
    

    which is perfectly reasonable. But it violates the Law of Demeter: you can’t call ToString and AddDays here, because you’re not only using methods exposed by the Person class, but also those exposed by DateTime.

    A solution could be to add new methods to the Person class to handle these operations; of course, it would make the class bigger and less readable.
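
    For example (a hypothetical sketch, not code from the book), Person could expose exactly the operations its clients need, performing the date manipulation internally:

    class Person
    {
    	private DateTime birthDate;

    	// clients ask for what they need; the internal representation stays hidden
    	public string GetFormattedBirthDate() => birthDate.ToString("yyyy-MM-dd");
    	public DateTime GetBirthDatePlusDays(int days) => birthDate.AddDays(days);
    }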

    Therefore, I think the Law of Demeter is a good rule of thumb, but you should treat it as a suggestion, not as a strict rule.

    If you want to read more, you can refer to this article by Carlos Caballero or to this one by Robert Brautigam.

    Wrapping up

    We’ve seen that it’s not so easy to define which behaviors a class should expose. Do we need pure data or objects with behavior? And how can abstraction help us hide the internals of a class?

    Also, we’ve seen that it’s perfectly fine not to stick strictly to OOP principles, because that way of programming can’t always be applied to our projects and our processes.

    Happy coding!





    Source link

  • Clean code tips – Error handling | Code4IT


    The way you handle errors on your code can have a huge impact on the maintainability of your projects. Don’t underestimate the power of clean error handling.


    We all know that nothing goes perfectly smoothly: network errors, invalid formats, null references… We can have a long, long list of what can go wrong in our applications. So, it is important to handle errors with the same care we have for all the rest of our code.

    This is the fourth part of this series about clean code, a recap of what I learned from Uncle Bob’s “Clean Code”. If you want to read more, here are the other articles I wrote:

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Status codes or exceptions?

    In Uncle Bob’s opinion, we should always prefer exceptions over status codes when returning values.

    Generally speaking, I agree. But let’s discuss a little about the differences.

    First of all, below you can see a method that, when downloading a string, returns both the status code and the real content of the operation.

    void Main()
    {
        (HttpStatus status, string content) = DownloadContent("https://code4it.dev");
        if (status == HttpStatus.Ok)
        {
            // do something with the content
        }
        else if (status == HttpStatus.NotFound)
        {
            // do something else
        }
        // and so on
    }
    
    public (HttpStatus, string) DownloadContent(string url)
    {
        // do something
    }
    
    // Define other methods and classes here
    
    public enum HttpStatus
    {
        Ok,
        NotFound,
        Unauthorized,
        GenericError
    }
    

    When you use status codes, you have to manually check the result of the operation with a switch or an if-else. So, if the caller forgets to check whether the operation was successful, you might run into unexpected execution paths.

    Now, let’s transform the code and use exceptions instead of status codes:

    void Main()
    {
        try
        {
            string content = DownloadContent("https://code4it.dev");
        }
        catch (NotFoundException nfe) {/*do something*/}
        catch (UnauthorizedException ue) {/*do something*/}
        catch (Exception e) {/*do something else*/}
    }
    
    public string DownloadContent(string url)
    {
        // do something
        return "something";
        // OR throw NotFoundException
        // OR throw UnauthorizedException
        // OR do something else
    }
    

    As you can see, the code is clearly easier to read: the “main” execution is defined within the try block.

    What are the pros and cons of using exceptions over status codes?

    • PRO: the “happy path” is easier to read
    • PRO: every time you forget to manage all the other cases, you will see a meaningful exception instead of ending up with a messy execution without a clue of what went wrong
    • PRO: the execution and the error handling parts are strongly separated, so you can easily separate the two concerns
    • CON: you are defining the execution path using exceptions instead of statuses (which is bad…)
    • CON: every time you add a try-catch block, you are adding overhead on the code execution.

    The reverse is obviously valid for status codes.

    So, what should you do? Well, exceptions should be used in exceptional cases, so if you are expecting a range of possible statuses that can all be managed in a reasonable way, go for enums. If you are expecting an “unexpected” path that you cannot manage directly, go for exceptions!

    If you really need status codes, use enums instead of strings or plain numbers.

    TDD can help you handling errors

    Don’t forget that error handling must be thoroughly tested. One of the best ways is to write your tests first: this will help you figure out what kind of exceptions, if any, your method should throw, and which ones it should handle.

    Once you have written some tests for error handling, add a try-catch block and start thinking about the actual business logic: you can now be sure that your tests cover the error paths.
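
    A sketch of what such a test-first step could look like, using NUnit-style syntax and the ReadDataFromFile method and DataTransferException exception that appear later in this article:

    [Test]
    public void ReadDataFromFile_Should_ThrowDataTransferException_When_FileDoesNotExist()
    {
        // written before the implementation: it pins down the error contract
        Assert.Throws<DataTransferException>(
            () => ReadDataFromFile(@"C:\not-existing-file"));
    }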

    Wrap external dependencies to manage their exceptions

    Say that you use a third-party library as a core part of your application.

    public class ExternalDependency
    {
        public string DownloadValue(string resourcePath){
            // do something
        }
    }
    

    and that this method throws some custom exceptions, like ResourceNotFoundException, InvalidCredentialsExceptions and so on.

    In the client code you might want to handle errors coming from that external dependency in a specific manner, while the general error handling has a different behavior.

    void Main()
    {
        ExternalDependency service = CreateExternalService();
    
        try
        {
        var value = GetValueToBeDownloaded();
            service.DownloadValue(value);
        }
        catch (ResourceNotFoundException rnfex)
        {
            logger.Log("Unable to get resource");
            ManageDownloadFailure();
        }
        catch (InvalidCredentialsExceptions icex)
        {
            logger.Log("Unable to get resource");
            ManageDownloadFailure();
        }
        catch (Exception ex)
        {
            logger.Log("Unable to complete the operation");
            DoSomethingElse();
        }
    }
    

    This seems reasonable, but what does it imply? First of all, we are repeating the same error handling in multiple catch blocks. Here I have only 2 custom exceptions, but think of complex libraries that can throw tens of exceptions. Also, what if the library adds a new exception? In that case, you’d have to update every client that calls the DownloadValue method.

    Also, the caller is not actually interested in the type of exception thrown by the external library; it only cares about the status of the operation, not the reason for a potential failure.

    So, in this case, the best thing to do is to wrap this external class into a custom one. In this way we can define our Exception types, enrich them with all the properties we need, and catch only them; all of this while being sure that even if the external library changes, our code won’t be affected.

    So, here’s an example of how we can wrap the ExternalDependency class:

    public class MyDownloader
    {
        public string DownloadValue(string resourcePath)
        {
            var service = new ExternalDependency();
    
            try
            {
                return service.DownloadValue(resourcePath);
            }
            catch (Exception ex)
            {
                throw new ResourceFileDownloadException(ex, resourcePath);
            }
        }
    }
    

    Now that all our clients use the MyDownloader class, the only type of exception to manage is ResourceFileDownloadException. Notice how I enriched this exception with the name of the resource that the service wasn’t able to download.

    Another good reason to wrap external libraries? What if they become obsolete, or you just need to use something else because it fits better with the use case you need?

    Define exception types thinking of the clients

    Why haven’t I exposed multiple exceptions, but I chose to throw only a ResourceFileDownloadException? Because you should define your exceptions thinking of how they can be helpful to their caller classes.

    I could have thrown other custom exceptions that mimic the ones exposed by the library, but they would have not brought value to the overall system. In fact, the caller does not care that MyDownloader failed because the resource does not exist, but it cares only that an error occurred when downloading a resource. It doesn’t even care that that exception was thrown by MyDownloader!

    So, when planning your exceptions, think of how they can be used by their clients rather than where they are thrown.

    Fighting the devil: null reference

    Everyone fights with null values. If you reference a null value, you will break the whole program with some ugly message, like cannot read property of … in JavaScript, or with a NullReferenceException in C#.

    So, the best thing to do to avoid this kind of error is, obviously, to reduce the amount of possible null values in our code.

    We can deal with it in two ways: avoid returning null from a function and avoid passing null values to functions!

    How to avoid returning null values

    Unless you have specific reasons to return null – that is, when null is an acceptable value in your domain – try not to return it.

    For string values, you can simply return empty strings, if it is considered an acceptable value.

    For lists of values, you should return an empty list.

    IEnumerable<char> GetOddChars(string value)
    {
        if (value.Length > 0)
        {
            // for example: return the characters at odd positions
            return value.Where((c, index) => index % 2 != 0);
        }
        else
        {
            return Enumerable.Empty<char>();
            // OR return new List<char>();
        }
    }
    

    In this way you can write something like this:

        var chars = GetOddChars("hello!");
        Console.WriteLine(chars.Count());
    
        foreach (char c in chars)
        {
            // Do Something
        }
    

    Without a single check on null values.

    What about objects? There are many approaches that you can take, like using the Null Object pattern
    which allows you to create an instance of an abstract class which does nothing at all, so that your code won’t care if the operations it does are performed on an actual object or on a Null Object.
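
    A minimal sketch of the pattern, with hypothetical names:

    abstract class Logger
    {
        public abstract void Log(string message);
    }

    class ConsoleLogger : Logger
    {
        public override void Log(string message) => Console.WriteLine(message);
    }

    // a valid instance that deliberately does nothing
    class NullLogger : Logger
    {
        public override void Log(string message) { }
    }

    Callers can always invoke Log without any null check: when logging is disabled, a factory simply returns a NullLogger instead of null.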

    How to avoid passing null values to functions

    Well, since we’ve already removed nulls from return values, we may expect that we will never pass them to our functions. Unfortunately, that’s not true: what if you were using external libraries to get some values and then passed them to your functions?

    Of course, it’s better to check for null values before calling the function, and not inside the function itself; in this way, the meaning of the function is clearer and the code is more concise.

    public float CalculatePension(Person person, Contract contract, List<Benefit> benefits)
    {
        if (person != null)
        {
            // do something with the person instance
            if(contract != null && benefits != null)
            {
                // do something with the contract instance
                if(benefits != null)
                {
                    // do something
                }
            }
        }
        // what else?
    }
    

    … and now see what happens when you repeat those checks for every method you write.

    As they say, prevention is better than cure!
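
    If you cannot guarantee non-null inputs at every call site, a leaner alternative to those nested ifs (again, a sketch, not the author’s code) is to fail fast with guard clauses at the top of the method:

    public float CalculatePension(Person person, Contract contract, List<Benefit> benefits)
    {
        // fail fast: after these guards, the rest of the method can assume valid inputs
        if (person == null) throw new ArgumentNullException(nameof(person));
        if (contract == null) throw new ArgumentNullException(nameof(contract));
        if (benefits == null) throw new ArgumentNullException(nameof(benefits));

        // actual business logic, with no further null checks
        return 0f; // placeholder
    }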

    Progressive refinements

    It’s time to apply those tips in a real(ish) scenario. Let’s write a method that reads data from the file system, parses its content, and sends it to a remote endpoint.

    Initial implementation

    First step: read a stream from file system:

    (bool, Stream) ReadDataFromFile(string filePath)
    {
        if (!string.IsNullOrWhiteSpace(filePath))
        {
            Stream stream = ReadFromFileSystem(filePath);
    
            if (stream != null && stream.Length > 0)
                return (true, stream);
        }
    
        return (false, null);
    }
    

    This method returns a tuple with info about the existence of the file and the stream itself.

    Next, we need to convert that stream into plain text:

    string ConvertStreamIntoString(Stream fileStream)
    {
        return fileStream.ConvertToString();
    }
    

    Nothing fancy. Ah, ConvertToString does not really exist in the .NET world, but let’s fake it!

    Third step, we need to send the string to the remote endpoint.

    OperationResult SendStringToApi(string fileContent)
    {
        using (var httpClient = new HttpClient())
        {
            httpClient.BaseAddress = new Uri("http://some-address");
    
            HttpRequestMessage message = new HttpRequestMessage();
            message.Method = HttpMethod.Post;
            message.Content = ConvertToContent(fileContent);
    
            var httpResult = httpClient.SendAsync(message).Result;
    
            if (httpResult.IsSuccessStatusCode)
                return OperationResult.Ok;
            else if (httpResult.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                return OperationResult.Unauthorized;
            else return OperationResult.GenericError;
        }
    }
    

    We use the native HttpClient .NET class to send our string to the remote endpoint, and then we fetch the result and map it to an enum, OperationResult.

    Hey, have you noticed it? I used an asynchronous method in a synchronous one using httpClient.SendAsync(message).Result. But it’s the wrong way to do it! If you want to know more, head to my article First steps with asynchronous programming in C#
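
    For reference, here’s a sketch of the asynchronous version, assuming the caller can await it:

    async Task<OperationResult> SendStringToApiAsync(string fileContent)
    {
        using (var httpClient = new HttpClient())
        {
            httpClient.BaseAddress = new Uri("http://some-address");

            HttpRequestMessage message = new HttpRequestMessage();
            message.Method = HttpMethod.Post;
            message.Content = ConvertToContent(fileContent);

            // await instead of blocking on .Result
            var httpResult = await httpClient.SendAsync(message);

            if (httpResult.IsSuccessStatusCode)
                return OperationResult.Ok;
            else if (httpResult.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                return OperationResult.Unauthorized;
            else return OperationResult.GenericError;
        }
    }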

    Finally, the main method.

    void Main()
    {
        (bool fileExists, Stream fileStream) = ReadDataFromFile(@"C:\some-path");
        if (fileExists)
        {
            string fileContent = ConvertStreamIntoString(fileStream);
            if (!string.IsNullOrWhiteSpace(fileContent))
            {
                var operationResult = SendStringToApi(fileContent);
                if (operationResult == OperationResult.Ok)
                {
                    Console.WriteLine("Yeah!");
                }
                else
                {
                    Console.WriteLine("Not able to complete the operation");
                }
            }
            else
            {
                Console.WriteLine("The file was empty");
            }
        }
        else
        {
            Console.WriteLine("File does not exist");
        }
    }
    

    Quite hard to understand, right? All those if-else do not add value to our code. We don’t manage errors in an alternate way, we just write on console that something has gone wrong. So, we can improve it by removing all those else blocks.

    void Main()
    {
        (bool fileExists, Stream fileStream) = ReadDataFromFile(@"C:\some-path");
        if (fileExists)
        {
            string fileContent = ConvertStreamIntoString(fileStream);
            if (!string.IsNullOrWhiteSpace(fileContent))
            {
                var operationResult = SendStringToApi(fileContent);
                if (operationResult == OperationResult.Ok)
                {
                    Console.WriteLine("Yeah!");
                }
            }
        }
    }
    

    A bit better! It definitely looks like the code I used to write. But we can do more. 💪

    A better way

    Let’s improve each step.

    Take the ReadDataFromFile method. The boolean value returned in the tuple is a flag and should be removed. How? Time to create a custom exception.

    What should we call this exception? DataReadException? FileSystemException? Since we should think of the needs of the caller, not of the method itself, a good name could be DataTransferException.

    Stream ReadDataFromFile(string filePath)
    {
        try
        {
            Stream stream = ReadFromFileSystem(filePath);
            if (stream != null && stream.Length > 0) return stream;
            else throw new DataTransferException($"file {filePath} not found or invalid");
        }
        catch (DataTransferException) { throw; }
        catch (Exception ex)
        {
            throw new DataTransferException($"Unable to get data from {filePath}", ex);
        }
    }
    

    We can notice 3 main things:

    1. we don’t check anymore if the filePath value is null, because we will always pass a valid string (to avoid null values as input parameters);
    2. if the stream is invalid, we throw a new DataTransferException exception with all the info we need;
    3. since we don’t know if the native classes to interact with file system will change and throw different exceptions, we wrap every error into our custom DataTransferException.

    Here I decided to remove the boolean value because we don’t have an alternate way to move on with the operations. If we had a fallback way to retrieve the stream (for example from another source) we could have kept our tuple and perform the necessary checks.

    The ConvertStreamIntoString method doesn’t do much: it just calls another method. If we had control over that ConvertToString, we could handle it like we did with ReadDataFromFile. Notice that we don’t need to check whether the input stream is valid, because we have already done it in the ReadDataFromFile method.
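
    If we did have that control, the same wrapping strategy would look like this (a sketch, reusing the faked ConvertToString):

    string ConvertStreamIntoString(Stream fileStream)
    {
        try
        {
            return fileStream.ConvertToString();
        }
        catch (Exception ex)
        {
            throw new DataTransferException("Unable to convert the stream into a string", ex);
        }
    }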

    Time to update our SendStringToApi!

    Since we’re using an external class to perform HTTP requests (the native HttpClient), we’ll wrap our code into a try-catch-block and throw only exceptions of type DataTransferException; and since we don’t actually need a result, we can return void instead of that OperationResult enum.

    void SendStringToApi(string fileContent)
    {
        HttpClient httpClient = null;
        try
        {
            httpClient = new HttpClient();
        httpClient.BaseAddress = new Uri("http://some-address");
    
            HttpRequestMessage message = new HttpRequestMessage();
            message.Method = HttpMethod.Post;
            message.Content = ConvertToContent(fileContent);
            var httpResult = httpClient.SendAsync(message).Result;
    
            httpResult.EnsureSuccessStatusCode();
        }
        catch (Exception ex)
        {
            throw new DataTransferException("Unable to send data to the endpoint", ex);
        }
        finally
        {
        httpClient?.Dispose();
        }
    }
    

    Now we can finally update our Main method and remove all that clutter that did not bring any value to our code:

    void Main()
    {
        try
        {
            Stream fileStream = ReadDataFromFile(@"C:\some-path");
            string fileContent = ConvertStreamIntoString(fileStream);
            SendStringToApi(fileContent);
            Console.WriteLine("Yeah!");
        }
        catch (DataTransferException dtex)
        {
            Console.WriteLine($"Unable to complete the transfer: {dtex.Message}");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
        }
    }
    

    Finally, someone who reads our code has a clear idea of what’s going on, and of how information flows from one step to another.

    Much better, isn’t it?

    Wrapping up

    We’ve seen that writing good error handling is not as easy as it seems. You must consider a lot of things, like

    • choosing whether to use only exceptions or to rely also on status codes
    • defining which exceptions a method should throw and which ones it should catch (you can use TDD to plan for them easily)

    Also, remember that

    • external libraries may change or may be cumbersome, so you’d better wrap external classes into custom ones
    • exceptions should be client-oriented, to help callers understand what’s going on without unnecessary details

    Happy coding!




  • Clean code tips – Tests | Code4IT


    Tests are as important as production code. Well, they are even more important! So writing them well brings lots of benefits to your projects.


    Clean code principles apply not only to production code but also to tests. Indeed, a test should be even cleaner, easier to understand, and more meaningful than production code.

    In fact, tests not only prevent bugs: they also document your application! New team members should look at tests to understand how a class, a function, or a module works.

    So, every test must have a clear meaning, must have its own raison d’être, and must be written well enough to let the readers understand it without too much fuss.

    In this last article of the Clean Code Series, we’re gonna see some tips to improve your tests.

    If you are interested in more tips about Clean Code, here are the other articles:

    1. names and function arguments
    2. comments and formatting
    3. abstraction and objects
    4. error handling
    5. tests

    Why you should keep tests clean

    As I said before, tests are also meant to document your code: given a specific input or state, they help you understand what the result will be in a deterministic way.

    But, since tests are dependent on the production code, you should adapt them when the production code changes: this means that tests must be clean and flexible enough to let you update them without big issues.

    If your test suite is a mess, even the slightest update in your code will force you to spend a lot of time updating your tests: that’s why you should organize your tests with the same care as your production code.

    Good tests also have a nice side effect: they make your code more flexible. Why? Well, if you have good test coverage, and all your tests are meaningful, you will be more confident in applying changes and adding new functionalities. Otherwise, when you change your code, you won’t be sure that the new code works as expected, nor that you haven’t introduced any regression.

    So, having a clean, thorough test suite is crucial for the life of your application.

    How to keep tests clean

    We’ve seen why we should write clean tests. But how should you write them?

    Let’s write a bad test:

    [Test]
    public void CreateTableTest()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        var node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    This test proves that the CreateTableInfo method of the TableInfoCreator class correctly parses the HTML passed as input and returns a TableInfo object that contains info about rows and headers.

    This is kind of a mess, isn’t it? Let’s improve it.

    Use appropriate test names

    What does CreateTableTest do? How does it help the reader understand what’s going on?

    We need to explicitly say what the tests want to achieve. There are many ways to do it; one of the most used is the Given-When-Then pattern: every method name should express those concepts, possibly in a consistent way.

    I like to always use the same format when naming tests: {Something}_Should_{DoSomething}_When_{Condition}. This format explicitly shows what the test verifies and why it exists.

    So, let’s change the name:

    [Test]
    public void CreateTableInfo_Should_CreateTableInfoWithCorrectHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
    
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(tableContent);
        HtmlNode node = doc.DocumentNode.ChildNodes[0];
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    Now, just by reading the name of the test, we know what to expect.

    Initialization

    The next step is to refactor the tests to initialize all the stuff in a better way.

    The first step is to remove the creation of the HtmlNode seen in the previous example, and move it to an external function: this will reduce code duplication and help the reader understand the test without worrying about the HtmlNode creation details:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        //Arrange
        string tableContent = @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    
        var tableInfo = new TableInfo(2);
    
     // HERE!
        HtmlNode node = CreateNodeElement(tableContent);
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    
    private static HtmlNode CreateNodeElement(string content)
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(content);
        return doc.DocumentNode.ChildNodes[0];
    }
    

    Then, depending on what you are testing, you could even extract input and output creation into different methods.

    If you extract them, you may end up with something like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var node = CreateWellFormedHtmlTable();
    
        var part = new TableInfoCreator(node);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = CreateWellFormedTableInfo();
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    
    private static TableInfo CreateWellFormedTableInfo()
    {
        var tableInfo = new TableInfo(2);
        tableInfo.SetHeaders(new string[] { "ColA", "ColB" });
        tableInfo.AddRow(new string[] { "Text1A", "Text1B" });
        tableInfo.AddRow(new string[] { "Text2A", "Text2B" });
        return tableInfo;
    }
    
    private HtmlNode CreateWellFormedHtmlTable()
    {
        var table = CreateWellFormedTable();
        return CreateNodeElement(table);
    }
    
    private static string CreateWellFormedTable()
        => @"<table>
            <thead>
                <tr>
                    <th>ColA</th>
                    <th>ColB</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>Text1A</td>
                    <td>Text1B</td>
                </tr>
                <tr>
                    <td>Text2A</td>
                    <td>Text2B</td>
                </tr>
            </tbody>
        </table>";
    

    So, now, the general structure of the test is definitely better. But, to understand what’s going on, readers have to jump to the details of both CreateWellFormedHtmlTable and CreateWellFormedTableInfo.

    Even worse, you have to duplicate those methods for every test case. You could do a further step by joining the input and the output into a single object:

    
    public class TableTestInfo
    {
        public HtmlNode Html { get; set; }
        public TableInfo ExpectedTableInfo { get; set; }
    }
    
    private TableTestInfo CreateTestInfoForWellFormedTable() =>
    new TableTestInfo
    {
        Html = CreateWellFormedHtmlTable(),
        ExpectedTableInfo = CreateWellFormedTableInfo()
    };
    

    and then, in the test, you simplify everything in this way:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        var testTableInfo = CreateTestInfoForWellFormedTable();
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        var result = part.CreateTableInfo();
    
        TableInfo tableInfo = testTableInfo.ExpectedTableInfo;
    
        result.Should().BeEquivalentTo(tableInfo);
    }
    

    In this way, you have all the info in a centralized place.

    But, sometimes, this is not the best way. Or, at least, in my opinion.

    In the previous example, the most important part is the elaboration of a specific input. So, to help readers, I usually prefer to keep inputs and outputs listed directly in the test method.

    On the contrary, if I had to test for some properties of a class or method (for instance, test that the sorting of an array with repeated values works as expected), I’d extract the initializations outside the test methods.

    AAA: Arrange, Act, Assert

    A good way to write tests is to follow a structured, consistent template. The most used one is the Arrange-Act-Assert pattern.

    It means that in the first part of the test you set up the objects and variables that will be used; then, you perform the operation under test; finally, you check whether the test passes, using assertions (like a simple Assert.IsTrue(condition)).

    I prefer to explicitly write comments to separate the 3 parts of each test, like this:

    [Test]
    public void CreateTableInfo_Should_CreateTableWithHeadersAndRows_When_TableIsWellFormed()
    {
        // Arrange
        var testTableInfo = CreateTestInfoForWellFormedTable();
        TableInfo expectedTableInfo = testTableInfo.ExpectedTableInfo;
    
        var part = new TableInfoCreator(testTableInfo.Html);
    
        // Act
        var actualResult = part.CreateTableInfo();
    
        // Assert
        actualResult.Should().BeEquivalentTo(expectedTableInfo);
    }
    

    Only one assertion per test (with some exceptions)

    Ideally, you may want to write tests with only a single assertion.

    Let’s take as an example a method that builds a User object using the parameters in input:

    public class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
        public Address AddressInfo { get; set; }
    }
    
    public class Address
    {
        public string Country { get; set; }
        public string City { get; set; }
    }
    
    public User BuildUser(string name, string lastName, DateTime birthdate, string country, string city)
    {
        return new User
        {
            FirstName = name,
            LastName = lastName,
            BirthDate = birthdate,
            AddressInfo = new Address
            {
                Country = country,
                City = city
            }
        };
    }
    

    Nothing fancy, right?

    So, ideally, we should write tests with a single assert (in the next examples, ignore the test names – I removed the When part!):

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectName()
    {
        // Arrange
        var name = "Davide";
    
        // Act
        var user = BuildUser(name, null, DateTime.Now, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
    }
    
    [Test]
    public void BuildUser_Should_CreateUserWithCorrectLastName()
    {
        // Arrange
        var lastName = "Bellone";
    
        // Act
        var user = BuildUser(null, lastName, DateTime.Now, null, null);
    
        // Assert
        user.LastName.Should().Be(lastName);
    }
    

    … and so on. Imagine writing a test for each property: your test class will be full of small methods that only clutter the code.

    If you can group assertions in a logical way, you could write more asserts in a single test:

    [Test]
    public void BuildUser_Should_CreateUserWithCorrectPlainInfo()
    {
        // Arrange
        var name = "Davide";
        var lastName = "Bellone";
        var birthDay = new DateTime(1991, 1, 1);
    
        // Act
        var user = BuildUser(name, lastName, birthDay, null, null);
    
        // Assert
        user.FirstName.Should().Be(name);
        user.LastName.Should().Be(lastName);
        user.BirthDate.Should().Be(birthDay);
    }
    

    This is fine because the three properties (FirstName, LastName, and BirthDate) are logically on the same level and with the same meaning.

    One concept per test

    As we stated before, the point is not to test only one property per test: each and every test must be focused on a single concept.

    By looking at the previous examples, you can notice that the AddressInfo property is built using the values passed as parameters to the BuildUser method. That makes it a good candidate for a test of its own.

    Another way of seeing this tip is thinking of the properties of an object (I mean, the mathematical properties). If you’re creating your custom sorting, think of which properties can be applied to your method. For instance:

    • an empty list, when sorted, is still an empty list
    • a list with 1 item, when sorted, still has one item
    • applying the sorting to an already sorted list does not change the order

    and so on.

    So you don’t want to test every possible input but focus on the properties of your method.

    In a similar way, think of a method that returns the number of days between today and a given date. In this case, a single test is not enough.

    You have to test – at least – what happens if the other date:

    • is exactly today
    • is in the future
    • is in the past
    • is in the next year
    • is February 29th of a leap year (to check an edge case)
    • is February 30th (to check an invalid date)

    Each of these tests is against a single value, so you might be tempted to put everything in a single test method. But here you are running tests against different concepts, so place every one of them in a separate test method.

    Of course, in this example, you must not rely on the native way to get the current date (in C#, DateTime.Now or DateTime.UtcNow). Rather, you have to mock the current date.

    FIRST tests: Fast, Independent, Repeatable, Self-validating, and Timely

    You’ll often read the word FIRST when talking about the properties of good tests. What does FIRST mean?

    It is simply an acronym. A test must be Fast, Independent, Repeatable, Self-validating, and Timely.

    Fast

    Tests should be fast. How fast? Fast enough not to discourage developers from running them. This property applies mainly to Unit Tests: while each unit test should run in less than 1 second, you may have some Integration and E2E tests that take more than 10 seconds – it depends on what you’re testing.

    Now, imagine that you have to update one class (or one method) and then re-run all your tests. If the whole test suite takes just a few seconds, you can run it whenever you want – some devs run all the tests every time they hit Save. But if every single test takes 1 second to run, and you have 200 tests, a simple update to one class makes you lose at least 200 seconds: more than 3 minutes. Yes, I know you can run them in parallel, but that’s not the point!

    So, keep your tests short and fast.

    Independent

    Every test method must be independent of the other tests.

    This means that the result and the execution of one method must not impact the execution of another one. Conversely, one method must not rely on the execution of another method.

    A concrete example?

    public class MyTests
    {
        string userName = "Lenny";
    
        [Test]
        public void Test1()
        {
            Assert.AreEqual("Lenny", userName);
            userName = "Carl";
    
        }
    
        [Test]
        public void Test2()
        {
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    Those tests pass when run in that exact sequence: Test1 affects the execution of Test2 by setting a field used by the second method. But what happens if you run only Test2? It will fail. Same result if the tests run in a different order.

    So, you can transform the previous method in this way:

    public class MyTests
    {
        string userName;
    
        [SetUp]
        public void Setup()
        {
            userName = "Boe";
        }
    
        [Test]
        public void Test1()
        {
            userName = "Lenny";
            Assert.AreEqual("Lenny", userName);
    
        }
    
        [Test]
        public void Test2()
        {
            userName = "Carl";
            Assert.AreEqual("Carl", userName);
        }
    
    }
    

    In this way, we have a default value, Boe, that gets overridden by the individual test methods – only when needed.

    Repeatable

    Every Unit Test must be repeatable: this means that you must be able to run it at any moment and on every machine (and always get the same result).

    So, avoid all the strong dependencies on your machine (like file names, absolute paths, and so on), and everything that is not directly under your control: the current date and time, random-generated numbers, and GUIDs.

    To work with them there’s only a solution: abstract them and use a mocking mechanism.

    If you want to learn 3 ways to do this, check out my 3 ways to inject DateTime and test it. There I explained how to inject DateTime, but the same approaches work even for GUIDs and random numbers.

    Self-validating

    You must be able to see the result of a test without performing more actions by yourself.

    So, don’t write your test results on an external file or source, and don’t put breakpoints on your tests to see if they’ve passed.

    Just put meaningful assertions and let your framework (and IDE) tell you the result.

    Timely

    You must write your tests when required. Usually, when using TDD, you write your tests right before your production code.

    So, this particular property applies only to devs who use TDD.

    Wrapping up

    In this article, we’ve seen that even if many developers consider tests redundant and not worthy of attention, they are first-class citizens of our applications.

    Paying enough attention to tests brings us a lot of advantages:

    • tests document our code, thus helping onboard new developers
    • they help us deploy with confidence a new version of our product, without worrying about regressions
    • they prove that our code has no bugs (well, actually you’ll always have a few bugs – you just haven’t discovered them yet)
    • code becomes more flexible and can be extended without too many worries

    So, write meaningful, well-written tests.

    Quality over quantity, always!

    Happy coding!




  • 13 tips for delivering better tech talks | Code4IT


    Doing a tech talk is easy. Doing a good talk is harder. We’re going to see some tips to improve the delivery of your conferences.


    I love to deliver tech talks: they help me improve both my technical and communication skills.

    Hey! If you’re starting doing tech talks, don’t miss my article Thoughts after my very first public speech where I explained what I did right and what I did wrong at my very first tech talk. Learn from my errors, and avoid them!💪

    On one hand, teaching stuff requires technical preparation: you need to know what you’re talking about, and you need to know it pretty well. Even more, you need to know some advanced stuff to give the audience something they will remember – if everything is obvious, what will they remember from your talk?

    On the other hand, tech talks require good communication skills: your job is to deliver a message to your audience, and you can do it only if your intent is clear and you avoid talking about useless (or misleading) stuff.

    But, in the end, only having good content is not enough: you need to shape the talk in a way that stimulates the attention of the public and does not bore them.

    note: I still have a lot of room for improvement, so I still have to work on myself to improve my talks!

    1- Tell your audience which topics your talk covers

    Why should someone attend your talk?

    This is a simple question, but it must be clear to you well before submitting your talk to CFPs. Usually, the best reason to attend is the content of the session (unless you attend a conference only for the free pizza and swag!).

    You should always express what the topic of your talk is.

    Where, and when?

    1. In the title: the title should express what you’re going to say. «Azure DevOps: an intro to build and release pipelines» is better than «Let’s work with Azure DevOps!». Yes, it’s less fancy, but you are making clear the scope (build and release pipelines), the tool (Azure DevOps), and the level of your talk (it’s an intro, not a talk that targets experts)
    2. In the description of your talk: when submitting to CFPs, when sharing it on social media, and everywhere else you can add some text to describe your talk, you should add some more details. For instance, «In this session, we’re gonna see how to build and release .NET Core projects with Azure DevOps pipelines, how to use PR builds, how to manage variable substitution with Variable Groups…». This will help readers decide whether or not to attend your session.
    3. At the beginning of your talk: this is for people who forgot to read the session description. Repeat the points you’re gonna cover at the beginning of your talk, right after the title and the slide about who you are. In this way, attendees can leave if they find out that the topic is not what they were expecting from the title. They don’t lose time on anything that’s not interesting for them, and you don’t lose your focus looking at their bored faces.

    2- Divide the talks into smaller blocks

    Think of your own experience: are you able to keep the focus on a 1-hour long talk? Or do you get distracted after 10 minutes, start wandering with the mind, and so on?

    Well, that’s normal. Generally, people have a short attention span. This means that you cannot talk for 60 minutes about the same topic: your audience will get bored soon.

    So, you should split your talk into several smaller blocks. A good idea is to separate the sub-topics into 5 or 10 minute slots, to help people understand the precise topic of a block and, in case, pay less attention to that specific block (maybe because it’s a topic they already know, so not focusing 100% is fine).

    3- Wake up the audience with simple questions

    Sometimes the easiest way to regain the attention of the attendees is to ask them some simple questions: «Can you see my screen?», «Has any of you already used this tool?».

    It’s easy to reply to these questions, even without thinking too much about the answer.

    These kinds of questions will wake up the audience and let them focus on what you’re saying for a bit longer.

    Needless to say, avoid asking such questions too many times, and don’t always repeat the same question.

    4- Choose the right slide layout

    Many monitors and screens are now in 16:9. So remember to adapt the slide layout to that format.

    In the image below, we can see how the slide layout impacts the overall look: slides with a 4:3 layout are too small for current devices, and they just look… ugly!

    The right format impacts how the slides are viewed on different devices

    Slides in 16:9 feel more natural for many screen layouts.

    It’s a simple trick to remember, but it may have a great impact on your delivery.

    5- Don’t move hands and body if it’s not necessary

    Moving your body too much drives the attention away from the content of your talk. Avoid fidgeting and moving your hands and head too much.

    Stop fidgeting!

    Remember that every movement of your body should have a meaning. Use your movements to drive attention to a specific topic, or to imitate and explain some details.
    For instance, use your hands to simulate how some modules communicate with each other.

    6- Check how the audience sees your screen

    When preparing your presentation, you are used to thinking of how you see the screen: you know your monitor size and resolution, and you can adjust your content based on that info.

    But you don’t know how the audience will see your screen.

    If you are doing an in-person talk, pay attention to the screens the audience sees: is the resolution fine? Do you have to increase the font size? Is it fine both for folks on the front and the last seats?

    On the contrary, when doing an online talk, you don’t know the device your audience will use: PC, tablet, smart tv, smartphone?

    This means that you can’t rely on the mouse cursor to point at a specific part of your monitor (eg: some text, a button, a menu item) as your audience may not see it.

    Where is the cursor?

    A good idea is to use a tool like ZoomIt: it allows you to zoom in on a part of your screen and to draw lines on a virtual layer.

    So, instead of saying «now click this button – hey, can you see my cursor?», use Zoomit to zoom on that button or, even better, to draw a rectangle or an arrow to highlight it.

    7- Pin the presentation folder to Quick Access in File Explorer

    As we’ve already discussed in my article 10 underestimated tasks to do before your next virtual presentation, you should hide all the desktop icons – they tend to distract the audience. This also implies that even the folder you use to store the presentation assets has to be hidden.

    But now… Damn, you’ve just closed the folder with all the conference assets! Now you have to find it again and navigate through your personal folders.

    If you use Windows, luckily you can simply right-click on your folder, click Pin to Quick access

    Click "Pin to quick access"

    and have it displayed in the Quick Access section of any folder you open.

    Folder displayed as Pinned Folder

    In this way, you can easily reach any folder with just one click.

    So your “main” folder will not be visible on your desktop, but you can still open it via the Quick Access panel.

    8- Stress when a topic is important

    You have created the presentation. You know why you built it and what the important parts are. But does your audience know what is important to remember?

    If you are talking for one hour, you are giving the public a lot of information. Some of it is trivia, some are niche details, and some are the key points of a topic.

    So, make it clear what is important to remember and what is just a “good-to-know”.

    For instance, when talking about clean code, stress why it is important to follow a certain rule if it can be a game-changer. «Use consistent names when classes have similar meaning» and «Choose whether to use tabs or spaces, and use them in all your files» are both valid tips, but the first one carries a different weight than the latter.

    Again, spend more time on the important stuff, and tell the audience explicitly that that part is important (and why).

    9- Use the slide space in the best way possible

    Let’s talk about the size of the slides’ font: should you keep it consistent, or adapt it to the text and space in each slide?

    I thought that keeping it consistent was a good idea – somehow it hurts my brain seeing different sizes in different slides.

    But then I realized that there are some exceptions: for example, when a slide contains only a few words or a few points in a bullet list. In that case, you should occupy the space in a better way, to avoid all the emptiness around your text.

    Here we have 2 slides with the same font:

    Two slides with the same font size

    The first one is fine, the second one is too empty.

    Let’s adjust the font of the second slide:

    Two slides with different font size

    It’s a bit better. Not excellent, but at least the audience can read it. The text is a bit bigger, but you’ll hardly notice it.

    10- Turn off all the notifications

    It’s simple: if you are sharing your screen, you don’t want your audience to see those weird messages you receive on Discord or the spam emails on Outlook.

    So, turn off all the notifications. Of course, unless you are demonstrating how to integrate your stuff with platforms like Slack, Teams et cetera.

    11- Use the slides as a reference, not as a teleprompter

    Avoid full sentences in your slides. Nobody’s gonna read them – especially if the audience is not paying attention!

    So, prefer putting just a few words instead of full, long sentences: you should not read your slides as if they were a teleprompter, and you should help your audience get back on track if they lose their focus.

    Two bullet points like “Keep track of your progress” and “Fix weakness” are better than a single phrase like “Remember to use some tool to keep track of the progress of your project, so that you can find the weak points and fix them”.

    – of course, unless it is a quote: you should write the full text if it is important.

    12- “End” is the word

    We’re nearly at the end of this session.

    A simple yet powerful statement that can wake up your audience.

    When you’ve lost your focus, certain words, like end, can trigger you. You unconsciously remember that you are at that conference for a reason, and you have to focus to get the most from the last minutes of the session.

    So, try triggering the subconscious of your audience with some words like ending.

    13- Recap what you’ve explained

    Finally, you’re at the end of your talk.

    What should the audience remember from it?

    Spend some time recapping what you’ve covered, the key points of your session, and what you’d like the others to remember.

    It is a good way to help the audience focus again and think of questions to bring to your attention.

    Wrapping up

    In this article, I’ve summarized some of the things I’ve worked on to improve my tech talks.

    There is still a lot to do, and a lot to learn. But I hope that those simple tricks will help other newbies like me to improve their talks.

    If you are interested in learning from a great speaker, you should definitely watch Scott Hanselman’s “The Art of Speaking” course on Pluralsight.

    Do you have any other resources to share? The comment section is here for you!




  • 7 Must-Know GSAP Animation Tips for Creative Developers



    Today we’re going to go over some of my favorite GSAP techniques that can bring you great results with just a little code.

    Although the GSAP documentation is among the best, I find that developers often overlook some of GSAP’s greatest features or perhaps struggle with finding their practical application. 

    The techniques presented here will be helpful to GSAP beginners and seasoned pros. It is recommended that you understand the basics of loading GSAP and working with tweens, timelines and SplitText. My free beginner’s course GSAP Express will guide you through everything you need for a firm foundation.

    If you prefer a video version of this tutorial, you can watch it here:

    https://www.youtube.com/watch?v=EKjYspj9MaM

    Tip 1: SplitText Masking

    GSAP’s SplitText just went through a major overhaul. It has 14 new features and weighs in at roughly 7kb.

    SplitText allows you to split HTML text into characters, lines, and words. It has powerful features to support screen-readers, responsive layouts, nested elements, foreign characters, emoji and more.

    My favorite feature is its built-in support for masking (available in SplitText version 3.13+).

    Prior to this version of SplitText, you had to manually nest your animated text in parent divs with overflow set to hidden or clip in the CSS.

    SplitText now does this for you by creating “wrapper divs” around the elements that we apply masking to.

    Basic Implementation

    The code below will split the h1 tag into chars and also apply a mask effect, which means the characters will not be visible when they are outside their bounding box.

    const split = SplitText.create("h1", {
    	type:"chars",
    	mask:"chars"
    })

    Demo: Split Text Masking (Basic)

    See the Pen
    Codrops Tip 1: Split Text Masking – Basic by Snorkl.tv (@snorkltv)
    on CodePen.

    This simple implementation works great and is totally fine.

    However, if you inspect the DOM you will see that 2 new <div> elements are created for each character:

    • an outer div with overflow:clip
    • an inner div with text 

    With 17 characters to split, this creates 34 divs, as shown in the simplified DOM structure below:

    <h1>SplitText Masking
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>S</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>p</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>l</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>i</div>
    	</div>
    	<div> <!-- char wrapper with overflow:clip -->
    		<div>t</div>
    	</div>	
    	...
    </h1>

    The More Efficient Approach

    If you want to minimize the amount of DOM elements created you can split your text into characters and lines. Then you can just set the masking on the lines element like so:

    const split = SplitText.create("h1", {
    	type:"chars, lines",
    	mask:"lines"
    })

    Demo: Split Text Masking (Better with chars and lines)

    See the Pen
    Codrops Tip 1: Split Text Masking – Better with chars and lines by Snorkl.tv (@snorkltv)
    on CodePen.

    Now if you inspect the DOM you will see that there are:

    • 1 line wrapper div with overflow:clip
    • 1 line div
    • 1 div per character 

    With 17 characters to split, this creates only 19 divs in total:

    <h1>SplitText Masking
    	<div> <!-- line wrapper with overflow:clip -->
    		<div> <!-- line -->
    			<div>S</div>
    			<div>p</div>
    			<div>l</div>
    			<div>i</div>
    			<div>t</div>
    			...
    		</div> 
    	</div> 
    </h1>

    Tip 2: Setting the Stagger Direction

    In my experience, 99% of stagger animations go from left to right. Perhaps that’s just because it’s the standard flow of written text.

    However, GSAP makes it super simple to add some animation pizzazz to your staggers.

    To change the direction from which staggered animations start, you need to use the object syntax for the stagger value.

    Normal Stagger

    Typically the stagger value is a single number which specifies the amount of time between the start of each target element’s animation.

    gsap.to(targets, {x:100, stagger:0.2}) // 0.2 seconds between the start of each animation

    Stagger Object

    By using the stagger object we can specify multiple parameters to fine-tune our staggers such as each, amount, from, ease, grid and repeat. See the GSAP Stagger Docs for more details.
    Our focus today will be on the from property which allows us to specify from which direction our staggers should start.

    gsap.to(targets, {x:100,
       stagger: {
         each:0.2, // amount of time between the start of each animation
         from:"center" // animate from the center of the targets array
       }
    })

    The from property in the stagger object can be any one of these string values:

    • “start” (default)
    • “center”
    • “end”
    • “edges”
    • “random”
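    To get a feel for what from changes, here’s a rough sketch of how the per-target delays could be derived from it. This is just an illustration in plain JavaScript – not GSAP’s actual implementation:

```javascript
// Hypothetical helper: compute stagger delays for `count` targets,
// `each` seconds apart, starting from the given position.
function staggerDelays(count, each, from) {
  const center = (count - 1) / 2;
  const distance = (i) => {
    switch (from) {
      case "end":    return count - 1 - i;                 // last target starts first
      case "center": return Math.abs(i - center);          // middle target starts first
      case "edges":  return center - Math.abs(i - center); // outer targets start first
      default:       return i;                             // "start" (left to right)
    }
  };
  return [...Array(count).keys()].map((i) => distance(i) * each);
}

console.log(staggerDelays(5, 1, "start"));  // → [0, 1, 2, 3, 4]
console.log(staggerDelays(5, 1, "center")); // → [2, 1, 0, 1, 2]
console.log(staggerDelays(5, 1, "edges"));  // → [0, 1, 2, 1, 0]
```

    (“random” is omitted here: it simply shuffles the starting order.)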

    Demo: Stagger Direction Timeline

    In this demo the characters animate in from center and then out from the edges.

    See the Pen
    Codrops Tip 2: Stagger Direction Timeline by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Stagger Direction Visualizer

    See the Pen
    Codrops Tip 2: Stagger Direction Visualizer by Snorkl.tv (@snorkltv)
    on CodePen.

    Tip 3: Wrapping Array Values

    The gsap.utils.wrap() function allows you to pull values from an array and apply them to multiple targets. This is great for allowing elements to animate in from opposite directions (like a zipper), assigning a set of colors to multiple objects and many more creative applications.

    Setting Colors From an Array

    I love using gsap.utils.wrap() with a set() to instantly manipulate a group of elements.

    // split the header
    const split = SplitText.create("h1", {
    	type:"chars"
    })
    
    //create an array of colors
    const colors = ["lime", "yellow", "pink", "skyblue"]
    
    // set each character to a color from the colors array
    gsap.set(split.chars, {color:gsap.utils.wrap(colors)})

    When the last color in the array (skyblue) has been used, GSAP wraps back to the beginning of the array and applies lime to the next element.
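    The wrapping itself is just modulo arithmetic over the element’s index. A tiny conceptual sketch in plain JavaScript (not GSAP’s implementation):

```javascript
// Conceptual version of gsap.utils.wrap(array): cycle through the
// array's values, starting over once the end is reached.
const wrap = (values) => (index) => values[index % values.length];

const colors = ["lime", "yellow", "pink", "skyblue"];
const pick = wrap(colors);

console.log(pick(0)); // → "lime"
console.log(pick(3)); // → "skyblue"
console.log(pick(4)); // → "lime" (wrapped back to the start)
```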

    Animating from Alternating Directions

    In the code below each target will animate in from alternating y values of -50 and 50. 

    Notice that you can define the array directly inside of the wrap() function.

    const tween = gsap.from(split.chars, {
    	y:gsap.utils.wrap([-50, 50]),
    	opacity:0,
    	stagger:0.1
    }) 

    Demo: Basic Wrap

    See the Pen
    Codrops Tip 3: Basic Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    Demo: Fancy Wrap

    In the demo below there is a timeline that creates a sequence of animations that combine stagger direction and wrap. Isn’t it amazing what GSAP allows you to do with just a few simple shapes and a few lines of code?

    See the Pen
    Codrops Tip 3: Fancy Wrap by Snorkl.tv (@snorkltv)
    on CodePen.

    As you watch the animation be sure to go through the GSAP code to see which tween is running each effect. 

    I strongly recommend editing the animation values and experimenting.

    Tip 4: Easy Randomization with the “random()” String Function

    GSAP has its own random utility function gsap.utils.random() that lets you tap into convenient randomization features anywhere in your JavaScript code.

    // generate a random number between 0 and 450
    const randomNumber = gsap.utils.random(0, 450)

    To randomize values in animations we can use the random string shortcut which saves us some typing.

    //animate each target to a random x value between 0 and 450
    gsap.to(targets, {x:"random(0, 450)"})
    
    //the third parameter sets the value to snap to
    gsap.to(targets, {x:"random(0, 450, 50)"}) // random number will be an increment of 50
    
    //pick a random value from an array for each target
    gsap.to(targets, {fill:"random([pink, yellow, orange, salmon])"})
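The snapping variant is easy to picture. Here is a rough sketch of what "random(0, 450, 50)" boils down to (randomSnapped is a made-up helper, not GSAP's source):

```javascript
// A random value in [min, max], snapped to the nearest increment.
function randomSnapped(min, max, snap) {
  const raw = min + Math.random() * (max - min);
  return Math.round(raw / snap) * snap;
}

randomSnapped(0, 450, 50); // always a multiple of 50, e.g. 150
```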

    Demo: Random String

    See the Pen
    Codrops Tip 4: Random String by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 5: repeatRefresh:true

    This next tip appears to be pure magic as it allows our animations to produce new results each time they repeat.

    GSAP internally stores the start and end values of an animation the first time it runs. This is a performance optimization so that each time it repeats there is no additional work to do. By default repeating tweens always produce the exact same results (which is a good thing).

    When dealing with dynamic or function-based values, such as those generated with the random string syntax “random(0, 100)”, we can tell GSAP to record new values on repeat by setting repeatRefresh:true.

    You can set repeatRefresh:true in the config object of a single tween OR on a timeline.

    //use on a tween
    gsap.to(target, {x:"random(50, 100)", repeat:10, repeatRefresh:true})
    
    //use on a timeline
    const tl = gsap.timeline({repeat:10, repeatRefresh:true})

    Demo: repeatRefresh Particles

    The demo below contains a single timeline with repeatRefresh:true.

    Each time it repeats the circles get assigned a new random scale and a new random x destination.

    Be sure to study the JS code in the demo. Feel free to fork it and modify the values.

    See the Pen
    Codrops Tip 5: repeatRefresh Particles by Snorkl.tv (@snorkltv)
    on CodePen.

    TIP 6: Tween The TimeScale() of an Animation

    GSAP animations have getter / setter values that allow you to get and set properties of an animation.

    Common Getter / Setter methods:

    • paused() gets or sets the paused state
    • duration() gets or sets the duration
    • reversed() gets or sets the reversed state
    • progress() gets or sets the progress
    • timeScale() gets or sets the timeScale

    Getter Setter Methods in Usage

    animation.paused(true) // sets the paused state to true
    console.log(animation.paused()) // gets the paused state
    console.log(!animation.paused()) // gets the inverse of the paused state

    See it in Action

    In the demo from the previous tip there is code that toggles the paused state of the particle effect.

    //click to pause
    document.addEventListener("click", function(){
    	tl.paused(!tl.paused()) 
    })

    This code means “every time the document is clicked the timeline’s paused state will change to the inverse (or opposite) of what it currently is”.

    If the animation is paused, it will become “unpaused” and vice-versa.

    This works great, but I’d like to show you a trick for making it less abrupt and smoothing it out.

    Tweening Numeric Getter/Setter Values

    We can’t tween the paused() state as it is either true or false.

    Where things get interesting is that we can tween numeric getter / setter properties of animations like progress() and timeScale().

    timeScale() represents a factor of an animation’s playback speed.

    • timeScale(1): playback at normal speed
    • timeScale(0.5): playback at half speed
    • timeScale(2): playback at double speed
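Put differently, the effective playback time is the duration divided by the timeScale. A quick sanity check:

```javascript
// Effective playback time of an animation at a given timeScale.
const effectiveSeconds = (duration, timeScale) => duration / timeScale;

effectiveSeconds(5, 0.5); // → 10: half speed takes twice as long
effectiveSeconds(5, 2);   // → 2.5: double speed takes half as long
```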

    Setting timeScale()

    //create an animation with a duration of 5 seconds
    const animation = gsap.to(box, {x:500, duration:5})
    
    //playback at half-speed making it take 10 seconds to play
    animation.timeScale(0.5)

    Tweening timeScale()

    const animation = gsap.to(box, {x:500, duration:5}) // create a basic tween
    
    // Over the course of 1 second reduce the timeScale of the animation to 0.5
    gsap.to(animation, {timeScale:0.5, duration:1})

    Dynamically Tweening timeScale() for smooth pause and un-pause

    Instead of abruptly changing the paused state of the animation, as the particle demo above does, we are now going to tween the timeScale() for a MUCH smoother effect.

    Demo: Particles with timeScale() Tween

    See the Pen
    Codrops Tip 6: Particles with timeScale() Tween by Snorkl.tv (@snorkltv)
    on CodePen.

    Click anywhere in the demo above to see the particles smoothly slow down and speed up on each click.

    The click handler in the demo basically says “if the animation is currently playing then we will slow it down or else we will speed it up”. Every time a click happens, the isPlaying value toggles between true and false so that it is up to date for the next click.
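That toggle logic can be sketched without any GSAP-specific code (the variable names and target values here are assumptions based on the demo's description, not its exact source):

```javascript
// Each click flips isPlaying and picks the timeScale value to tween
// toward: a crawl when "pausing", full speed when resuming.
let isPlaying = true;

function nextTimeScale() {
  isPlaying = !isPlaying;
  return isPlaying ? 1 : 0.05;
}

// In the demo, this value would feed something like:
// gsap.to(tl, { timeScale: nextTimeScale(), duration: 1 })
```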

    Tip 7: GSDevTools Markers and Animation IDs

    Most of the demos in this article have used GSDevTools to help us control our animations. When building animations I just love being able to scrub at my own pace and study the sequencing of all the moving parts.

    However, there is more to this powerful tool than just scrubbing, playing and pausing.

    Markers

    The in and out markers allow us to loop ANY section of an animation. As an added bonus, GSDevTools remembers the previous position of the markers so that each time we reload our animation it will start and end at the same time.

    This makes it very easy to loop a particular section and study it.

    Image from GSDevTools Docs

    Markers are a huge advantage when building animations longer than 3 seconds.

    To explore, open The Fancy Wrap() demo in a new window, move the markers and reload.

    Important: The markers are only available on screens wider than 600px. On small screens the UI is minimized to only show basic controls.

    Setting IDs for the Animation Menu

    The animation menu allows us to navigate to different sections of our animation based on an animation id. When dealing with long-form animations this feature is an absolute life saver.

    Since GSAP’s syntax makes creating complex sequences a breeze, it is not uncommon to find yourself working on animations that run beyond 10, 20 or even 60 seconds!

    To set an animation id:

    const tl = gsap.timeline({id:"fancy"})
    
    //Add the animation to GSDevTools based on variable reference
    GSDevTools.create({animation:tl})
    
    //OR add the animation to GSDevTools based on its id
    GSDevTools.create({animation:"fancy"})

    With the code above the name “fancy” will display in GSDevTools.

    Although you can use the id with a single timeline, this feature is most helpful when working with nested timelines as discussed below.

    Demo: GSAP for Everyone

    See the Pen
    Codrops Tip 7: Markers and Animation Menu by Snorkl.tv (@snorkltv)
    on CodePen.

    This demo is 26 seconds long and has 7 child timelines. Study the code to see how each timeline has a unique id that is displayed in the animation menu.

    Use the animation menu to navigate to and explore each section.

    Important: The animation menu is only available on screens wider than 600px.

    Hopefully you can see how useful markers and animation ids can be when working with these long-form, hand-coded animations!

    Want to Learn More About GSAP?

    I’m here to help. 

    I’ve spent nearly 5 years archiving everything I know about GSAP in video format spanning 5 courses and nearly 300 lessons at creativeCodingClub.com.

    I spent many years “back in the day” using GreenSock’s ActionScript tools as a Flash developer, and that experience led to me being hired at GreenSock when they switched to JavaScript. My time at GreenSock had me creating countless demos, videos and learning resources.

    Spending years answering literally thousands of questions in the support forums has left me with a unique ability to help developers of all skill levels avoid common pitfalls and get the most out of this powerful animation library.

    It’s my mission to help developers from all over the world discover the joy of animating with code through affordable, world-class training.

    Visit Creative Coding Club to learn more.




  • Top 6 Performance Tips when dealing with strings in C# 12 and .NET 8 | Code4IT

    Top 6 Performance Tips when dealing with strings in C# 12 and .NET 8 | Code4IT


    Small changes sometimes make a huge difference. Learn these 6 tips to improve the performance of your application just by handling strings correctly.


    Sometimes, just a minor change makes a huge difference. Maybe you won’t notice it when performing the same operation a few times. Still, the improvement is significant when repeating the operation thousands of times.

    In this article, we will learn six simple tricks to improve the performance of your application when dealing with strings.

    Note: this article is part of C# Advent Calendar 2023, organized by Matthew D. Groves: it’s maybe the only Christmas tradition I like (yes, I’m kind of a Grinch 😂).

    Benchmark structure, with dependencies

    Before jumping to the benchmarks, I want to spend a few words on the tools I used for this article.

    The project is a .NET 8 class library running on a laptop with an i5 processor.

    Running benchmarks with BenchmarkDotNet

    I’m using BenchmarkDotNet to create benchmarks for my code. BenchmarkDotNet is a library that runs your methods several times, captures some metrics, and generates a report of the executions. If you follow my blog, you might know I’ve used it several times – for example, in my old article “Enum.HasFlag performance with BenchmarkDotNet”.

    All the benchmarks I created follow the same structure:

    [MemoryDiagnoser]
    public class BenchmarkName()
    {
        [Params(1, 5, 10)] // clearly, I won't use these values
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark(Baseline=true)]
        public void FirstMethod()
        {
            //omitted
        }
    
        [Benchmark]
        public void SecondMethod()
        {
            //omitted
        }
    }
    

    In short:

    • the class is marked with the [MemoryDiagnoser] attribute: the benchmark will retrieve info for both time and memory usage;
    • there is a property named Size with the attribute [Params]: this attribute lists the possible values for the Size property;
    • there is a method marked as [IterationSetup]: this method runs before every single execution, takes the value from the Size property, and initializes the AllStrings array;
    • the methods that are part of the benchmark are marked with the [Benchmark] attribute.

    Generating strings with Bogus

    I relied on Bogus to create dummy values. This NuGet library allows you to generate realistic values for your objects with a great level of customization.

    The string array generation strategy is shared across all the benchmarks, so I moved it to a static method:

     public static class StringArrayGenerator
     {
         public static string[] Generate(int size, params string[] additionalStrings)
         {
             string[] array = new string[size];
             Faker faker = new Faker();
    
             List<string> fixedValues = [
                 string.Empty,
                 "   ",
                 "\n  \t",
                 null
             ];
    
             if (additionalStrings != null)
                 fixedValues.AddRange(additionalStrings);
    
             for (int i = 0; i < array.Length; i++)
             {
                 if (Random.Shared.Next() % 4 == 0)
                 {
                     array[i] = Random.Shared.GetItems<string>(fixedValues.ToArray(), 1).First();
                 }
                 else
                 {
                     array[i] = faker.Lorem.Word();
                 }
             }
    
             return array;
         }
     }
    

    Here I have a default set of predefined values ([string.Empty, "   ", "\n  \t", null]), which can be expanded with the values coming from the additionalStrings array. These values are then placed in random positions of the array.

    In most cases, though, the value of the string is defined by Bogus.

    Generating plots with chartbenchmark.net

    To generate the plots you will see in this article, I relied on chartbenchmark.net, a fantastic tool that transforms the console output generated by BenchmarkDotNet into a dynamic, customizable plot. This tool, created by Carlos Villegas, is available on GitHub, and it surely deserves a star!

    Please note that all the plots in this article have a Log10 scale: this scale allows me to show you the performance values of all the executions in the same plot. If I used the Linear scale, you would be able to see only the biggest values.

    We are ready. It’s time to run some benchmarks!

    Tip #1: StringBuilder is (almost always) better than String Concatenation

    Let’s start with a simple trick: if you need to concatenate strings, using a StringBuilder is generally more efficient than plain string concatenation.

    [MemoryDiagnoser]
    public class StringBuilderVsConcatenation()
    {
        [Params(4, 100, 10_000, 100_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark]
        public void WithStringBuilder()
        {
            StringBuilder sb = new StringBuilder();
    
            foreach (string s in AllStrings)
            {
                sb.Append(s);
            }
    
            var finalString = sb.ToString();
        }
    
        [Benchmark]
        public void WithConcatenation()
        {
            string finalString = "";
            foreach (string s in AllStrings)
            {
                finalString += s;
            }
        }
    }
    

    Whenever you concatenate strings with the + sign, you create a new instance of a string. This operation takes some time and allocates memory for every operation.

    On the contrary, with a StringBuilder object you append the strings to an internal buffer and generate the final string only once, when calling ToString().

    Here’s the result table:

    Method Size Mean Error StdDev Median Ratio RatioSD Allocated Alloc Ratio
    WithStringBuilder 4 4.891 us 0.5568 us 1.607 us 4.750 us 1.00 0.00 1016 B 1.00
    WithConcatenation 4 3.130 us 0.4517 us 1.318 us 2.800 us 0.72 0.39 776 B 0.76
    WithStringBuilder 100 7.649 us 0.6596 us 1.924 us 7.650 us 1.00 0.00 4376 B 1.00
    WithConcatenation 100 13.804 us 1.1970 us 3.473 us 13.800 us 1.96 0.82 51192 B 11.70
    WithStringBuilder 10000 113.091 us 4.2106 us 12.081 us 111.000 us 1.00 0.00 217200 B 1.00
    WithConcatenation 10000 74,512.259 us 2,111.4213 us 6,058.064 us 72,593.050 us 666.43 91.44 466990336 B 2,150.05
    WithStringBuilder 100000 1,037.523 us 37.1009 us 108.225 us 1,012.350 us 1.00 0.00 2052376 B 1.00
    WithConcatenation 100000 7,469,344.914 us 69,720.9843 us 61,805.837 us 7,465,779.900 us 7,335.08 787.44 46925872520 B 22,864.17

    Let’s see it as a plot.

    Beware of the scale in the diagram: it’s a Log10 scale, so you’d better have a look at the values displayed on the Y-axis.

    StringBuilder vs string concatenation in C#: performance benchmark

    As you can see, there is a considerable performance improvement.

    There are some remarkable points:

    1. When there are just a few strings to concatenate, the + operator is more performant, both on timing and allocated memory;
    2. When you need to concatenate 100000 strings, the concatenation is ~7000 times slower than the string builder.

    In conclusion, use the StringBuilder to concatenate more than 5 or 6 strings. Use the string concatenation for smaller operations.

    Edit 2024-01-08: it turns out that string.Concat has an overload that accepts an array of strings. string.Concat(string[]) is actually faster than using the StringBuilder. Read more in this article by Robin Choffardet.

    Tip #2: EndsWith(string) vs EndsWith(char): pick the right overload

    One simple improvement can be made if you use StartsWith or EndsWith, passing a single character.

    There are two similar overloads: one that accepts a string, and one that accepts a char.

    [MemoryDiagnoser]
    public class EndsWithStringVsChar()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void EndsWithChar()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.EndsWith('e');
            }
        }
    
        [Benchmark]
        public void EndsWithString()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.EndsWith("e");
            }
        }
    }
    

    We have the following results:

    Method Size Mean Error StdDev Median Ratio
    EndsWithChar 100 2.189 us 0.2334 us 0.6771 us 2.150 us 1.00
    EndsWithString 100 5.228 us 0.4495 us 1.2970 us 5.050 us 2.56
    EndsWithChar 1000 12.796 us 1.2006 us 3.4831 us 12.200 us 1.00
    EndsWithString 1000 30.434 us 1.8783 us 5.4492 us 29.250 us 2.52
    EndsWithChar 10000 25.462 us 2.0451 us 5.9658 us 23.950 us 1.00
    EndsWithString 10000 251.483 us 18.8300 us 55.2252 us 262.300 us 10.48
    EndsWithChar 100000 209.776 us 18.7782 us 54.1793 us 199.900 us 1.00
    EndsWithString 100000 826.090 us 44.4127 us 118.5465 us 781.650 us 4.14
    EndsWithChar 1000000 2,199.463 us 74.4067 us 217.0480 us 2,190.600 us 1.00
    EndsWithString 1000000 7,506.450 us 190.7587 us 562.4562 us 7,356.250 us 3.45

    Again, let’s generate the plot using the Log10 scale:

    EndsWith(char) vs EndsWith(string) in C# performance benchmark

    They appear to be almost identical, but look closely: based on this benchmark, with 10,000 items, using EndsWith(string) is about 10x slower than EndsWith(char).

    Also, here, the duration ratio on the 1,000,000-item array is ~3.5. At first, I thought there was an error in the benchmark, but the ratio did not change when I reran it.

    It looks like you get the best improvement ratio when the array has ~10,000 items.

    Tip #3: IsNullOrEmpty vs IsNullOrWhitespace vs IsNullOrEmpty + Trim

    As you might know, string.IsNullOrWhiteSpace performs stricter checks than string.IsNullOrEmpty.

    (If you didn’t know, have a look at this quick explanation of the cases covered by these methods).
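As a quick refresher, here is how the two methods treat null and whitespace-only input (a minimal sketch; outputs shown as comments):

```csharp
// IsNullOrEmpty matches only null or "", while IsNullOrWhiteSpace
// also matches strings made entirely of whitespace characters.
Console.WriteLine(string.IsNullOrEmpty(null));          // True
Console.WriteLine(string.IsNullOrEmpty("   \t"));       // False: not empty
Console.WriteLine(string.IsNullOrWhiteSpace("   \t"));  // True: only whitespace
```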

    Does it affect performance?

    To demonstrate it, I have created three benchmarks: one for string.IsNullOrEmpty, one for string.IsNullOrWhiteSpace, and another one that lies in between: it first calls Trim() on the string, and then calls string.IsNullOrEmpty.

    [MemoryDiagnoser]
    public class StringEmptyBenchmark
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void StringIsNullOrEmpty()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrEmpty(s);
            }
        }
    
        [Benchmark]
        public void StringIsNullOrEmptyWithTrim()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrEmpty(s?.Trim());
            }
        }
    
        [Benchmark]
        public void StringIsNullOrWhitespace()
        {
            foreach (string s in AllStrings)
            {
                _ = string.IsNullOrWhiteSpace(s);
            }
        }
    }
    

    We have the following values:

    Method Size Mean Error StdDev Ratio
    StringIsNullOrEmpty 100 1.723 us 0.2302 us 0.6715 us 1.00
    StringIsNullOrEmptyWithTrim 100 2.394 us 0.3525 us 1.0282 us 1.67
    StringIsNullOrWhitespace 100 2.017 us 0.2289 us 0.6604 us 1.45
    StringIsNullOrEmpty 1000 10.885 us 1.3980 us 4.0781 us 1.00
    StringIsNullOrEmptyWithTrim 1000 20.450 us 1.9966 us 5.8240 us 2.13
    StringIsNullOrWhitespace 1000 13.160 us 1.0851 us 3.1482 us 1.34
    StringIsNullOrEmpty 10000 18.717 us 1.1252 us 3.2464 us 1.00
    StringIsNullOrEmptyWithTrim 10000 52.786 us 1.2208 us 3.5222 us 2.90
    StringIsNullOrWhitespace 10000 46.602 us 1.2363 us 3.4668 us 2.54
    StringIsNullOrEmpty 100000 168.232 us 12.6948 us 36.0129 us 1.00
    StringIsNullOrEmptyWithTrim 100000 439.744 us 9.3648 us 25.3182 us 2.71
    StringIsNullOrWhitespace 100000 394.310 us 7.8976 us 20.5270 us 2.42
    StringIsNullOrEmpty 1000000 2,074.234 us 64.3964 us 186.8257 us 1.00
    StringIsNullOrEmptyWithTrim 1000000 4,691.103 us 112.2382 us 327.4040 us 2.28
    StringIsNullOrWhitespace 1000000 4,198.809 us 83.6526 us 161.1702 us 2.04

    As you can see from the Log10 plot, the results are pretty similar:

    string.IsNullOrEmpty vs string.IsNullOrWhiteSpace vs Trim in C#: performance benchmark

    On average, StringIsNullOrWhitespace is ~2 times slower than StringIsNullOrEmpty.

    So, what should we do? Here’s my two cents:

    1. For all the data coming from the outside (passed as input to your system, received from an API call, read from the database), use string.IsNullOrWhiteSpace: this way you can ensure that you are not receiving unexpected data;
    2. If you read data from an external API, customize your JSON deserializer to convert whitespace strings as empty values;
    3. Needless to say, choose the proper method depending on the use case. If a string like “\n \n \t” is a valid value for you, use string.IsNullOrEmpty.

    Tip #4: ToUpper vs ToUpperInvariant vs ToLower vs ToLowerInvariant: they look similar, but they are not

    Even though they look similar, there is a difference in terms of performance between these four methods.

    [MemoryDiagnoser]
    public class ToUpperVsToLower()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size);
        }
    
        [Benchmark]
        public void WithToUpper()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToUpper();
            }
        }
    
        [Benchmark]
        public void WithToUpperInvariant()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToUpperInvariant();
            }
        }
    
        [Benchmark]
        public void WithToLower()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToLower();
            }
        }
    
        [Benchmark]
        public void WithToLowerInvariant()
        {
            foreach (string s in AllStrings)
            {
                _ = s?.ToLowerInvariant();
            }
        }
    }
    

    What will this benchmark generate?

    Method Size Mean Error StdDev Median P95 Ratio
    WithToUpper 100 9.153 us 0.9720 us 2.789 us 8.200 us 14.980 us 1.57
    WithToUpperInvariant 100 6.572 us 0.5650 us 1.639 us 6.200 us 9.400 us 1.14
    WithToLower 100 6.881 us 0.5076 us 1.489 us 7.100 us 9.220 us 1.19
    WithToLowerInvariant 100 6.143 us 0.5212 us 1.529 us 6.100 us 8.400 us 1.00
    WithToUpper 1000 69.776 us 9.5416 us 27.833 us 68.650 us 108.815 us 2.60
    WithToUpperInvariant 1000 51.284 us 7.7945 us 22.860 us 38.700 us 89.290 us 1.85
    WithToLower 1000 49.520 us 5.6085 us 16.449 us 48.100 us 79.110 us 1.85
    WithToLowerInvariant 1000 27.000 us 0.7370 us 2.103 us 26.850 us 30.375 us 1.00
    WithToUpper 10000 241.221 us 4.0480 us 3.588 us 240.900 us 246.560 us 1.68
    WithToUpperInvariant 10000 339.370 us 42.4036 us 125.028 us 381.950 us 594.760 us 1.48
    WithToLower 10000 246.861 us 15.7924 us 45.565 us 257.250 us 302.875 us 1.12
    WithToLowerInvariant 10000 143.529 us 2.1542 us 1.910 us 143.500 us 146.105 us 1.00
    WithToUpper 100000 2,165.838 us 84.7013 us 223.137 us 2,118.900 us 2,875.800 us 1.66
    WithToUpperInvariant 100000 1,885.329 us 36.8408 us 63.548 us 1,894.500 us 1,967.020 us 1.41
    WithToLower 100000 1,478.696 us 23.7192 us 50.547 us 1,472.100 us 1,571.330 us 1.10
    WithToLowerInvariant 100000 1,335.950 us 18.2716 us 35.203 us 1,330.100 us 1,404.175 us 1.00
    WithToUpper 1000000 20,936.247 us 414.7538 us 1,163.014 us 20,905.150 us 22,928.350 us 1.64
    WithToUpperInvariant 1000000 19,056.983 us 368.7473 us 287.894 us 19,085.400 us 19,422.880 us 1.41
    WithToLower 1000000 14,266.714 us 204.2906 us 181.098 us 14,236.500 us 14,593.035 us 1.06
    WithToLowerInvariant 1000000 13,464.127 us 266.7547 us 327.599 us 13,511.450 us 13,926.495 us 1.00

    Let’s see it as the usual Log10 plot:

    ToUpper vs ToLower comparison in C#: performance benchmark

    We can notice a few points:

    1. The ToUpper family is generally slower than the ToLower family;
    2. The Invariant family is faster than the non-Invariant one; we will see more below;

    So, if you have to normalize strings using the same casing, ToLowerInvariant is the best choice.

    Tip #5: OrdinalIgnoreCase vs InvariantCultureIgnoreCase: logically (almost) equivalent, but with different performance

    Comparing strings is trivial: methods like string.Compare and string.Equals are all you need.

    There are several modes to compare strings: you can specify the comparison rules by setting the comparisonType parameter, which accepts a StringComparison value.

    [MemoryDiagnoser]
    public class StringCompareOrdinalVsInvariant()
    {
        [Params(100, 1000, 10_000, 100_000, 1_000_000)]
        public int Size;
    
        public string[] AllStrings { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            AllStrings = StringArrayGenerator.Generate(Size, "hello!", "HELLO!");
        }
    
        [Benchmark(Baseline = true)]
        public void WithOrdinalIgnoreCase()
        {
            foreach (string s in AllStrings)
            {
                _ = string.Equals(s, "hello!", StringComparison.OrdinalIgnoreCase);
            }
        }
    
        [Benchmark]
        public void WithInvariantCultureIgnoreCase()
        {
            foreach (string s in AllStrings)
            {
                _ = string.Equals(s, "hello!", StringComparison.InvariantCultureIgnoreCase);
            }
        }
    }
    

    Let’s see the results:

    Method Size Mean Error StdDev Ratio
    WithOrdinalIgnoreCase 100 2.380 us 0.2856 us 0.8420 us 1.00
    WithInvariantCultureIgnoreCase 100 7.974 us 0.7817 us 2.3049 us 3.68
    WithOrdinalIgnoreCase 1000 11.316 us 0.9170 us 2.6603 us 1.00
    WithInvariantCultureIgnoreCase 1000 35.265 us 1.5455 us 4.4591 us 3.26
    WithOrdinalIgnoreCase 10000 20.262 us 1.1801 us 3.3668 us 1.00
    WithInvariantCultureIgnoreCase 10000 225.892 us 4.4945 us 12.5289 us 11.41
    WithOrdinalIgnoreCase 100000 148.270 us 11.3234 us 32.8514 us 1.00
    WithInvariantCultureIgnoreCase 100000 1,811.144 us 35.9101 us 64.7533 us 12.62
    WithOrdinalIgnoreCase 1000000 2,050.894 us 59.5966 us 173.8460 us 1.00
    WithInvariantCultureIgnoreCase 1000000 18,138.063 us 360.1967 us 986.0327 us 8.87

    As you can see, there’s a HUGE difference between Ordinal and Invariant.

    When dealing with 100,000 items, StringComparison.InvariantCultureIgnoreCase is ~12 times slower than StringComparison.OrdinalIgnoreCase!

    Ordinal vs InvariantCulture comparison in C#: performance benchmark

    Why? Also, why should we use one instead of the other?

    Have a look at this code snippet:

    var s1 = "Aa";
    var s2 = "A" + new string('\u0000', 3) + "a";
    
    string.Equals(s1, s2, StringComparison.InvariantCultureIgnoreCase); //True
    string.Equals(s1, s2, StringComparison.OrdinalIgnoreCase); //False
    

    As you can see, s1 and s2 represent equivalent, but not equal, strings. We can then deduce that OrdinalIgnoreCase checks for the exact values of the characters, while InvariantCultureIgnoreCase checks the string’s “meaning”.

    So, in most cases, you might want to use OrdinalIgnoreCase (as always, it depends on your use case!)

    Tip #6: Newtonsoft vs System.Text.Json: it’s a matter of memory allocation, not time

    For the last benchmark, I created the exact same model used as an example in the official documentation.

    This benchmark aims to see which JSON serialization library is faster: Newtonsoft or System.Text.Json?

    [MemoryDiagnoser]
    public class JsonSerializerComparison
    {
        [Params(100, 10_000, 1_000_000)]
        public int Size;
        List<User?> Users { get; set; }
    
        [IterationSetup]
        public void Setup()
        {
            Users = UsersCreator.GenerateUsers(Size);
        }
    
        [Benchmark(Baseline = true)]
        public void WithJson()
        {
            foreach (User? user in Users)
            {
                var asString = System.Text.Json.JsonSerializer.Serialize(user);
    
                _ = System.Text.Json.JsonSerializer.Deserialize<User?>(asString);
            }
        }
    
        [Benchmark]
        public void WithNewtonsoft()
        {
            foreach (User? user in Users)
            {
                string asString = Newtonsoft.Json.JsonConvert.SerializeObject(user);
                _ = Newtonsoft.Json.JsonConvert.DeserializeObject<User?>(asString);
            }
        }
    }
    

    As you might know, the .NET team has added lots of performance improvements to the JSON Serialization functionalities, and you can really see the difference!

    | Method         | Size    | Mean         | Error       | StdDev      | Median       | Ratio | RatioSD | Gen0         | Gen1      | Allocated     | Alloc Ratio |
    |----------------|---------|--------------|-------------|-------------|--------------|-------|---------|--------------|-----------|---------------|-------------|
    | WithJson       | 100     | 2.063 ms     | 0.1409 ms   | 0.3927 ms   | 1.924 ms     | 1.00  | 0.00    | -            | -         | 292.87 KB     | 1.00        |
    | WithNewtonsoft | 100     | 4.452 ms     | 0.1185 ms   | 0.3243 ms   | 4.391 ms     | 2.21  | 0.39    | -            | -         | 882.71 KB     | 3.01        |
    | WithJson       | 10000   | 44.237 ms    | 0.8787 ms   | 1.3936 ms   | 43.873 ms    | 1.00  | 0.00    | 4000.0000    | 1000.0000 | 29374.98 KB   | 1.00        |
    | WithNewtonsoft | 10000   | 78.661 ms    | 1.3542 ms   | 2.6090 ms   | 78.865 ms    | 1.77  | 0.08    | 14000.0000   | 1000.0000 | 88440.99 KB   | 3.01        |
    | WithJson       | 1000000 | 4,233.583 ms | 82.5804 ms  | 113.0369 ms | 4,202.359 ms | 1.00  | 0.00    | 484000.0000  | 1000.0000 | 2965741.56 KB | 1.00        |
    | WithNewtonsoft | 1000000 | 5,260.680 ms | 101.6941 ms | 108.8116 ms | 5,219.955 ms | 1.24  | 0.04    | 1448000.0000 | 1000.0000 | 8872031.8 KB  | 2.99        |

    As you can see, Newtonsoft is up to 2x slower than System.Text.Json, and it allocates about 3x the memory.

    So, if you don't rely on Newtonsoft-specific functionality, I suggest you replace it with System.Text.Json.
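
    For simple models, the swap is mostly mechanical. Here's a minimal sketch (the User record is a made-up model, not the one used in the benchmark):

    ```cs
    using System;
    using System.Text.Json;

    public record User(string Name, int Age);

    public static class Demo
    {
        public static void Main()
        {
            var user = new User("Davide", 42);

            // Newtonsoft equivalent: JsonConvert.SerializeObject(user)
            string json = JsonSerializer.Serialize(user);

            // Newtonsoft equivalent: JsonConvert.DeserializeObject<User>(json)
            User? roundTripped = JsonSerializer.Deserialize<User>(json);

            Console.WriteLine(json); // {"Name":"Davide","Age":42}
        }
    }
    ```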

    Wrapping up

    In this article, we learned that even tiny changes can make a difference in the long run.

    Let’s recap some:

    1. Using StringBuilder is generally WAY faster than using string concatenation, unless you only need to concatenate 2 to 4 strings;
    2. Sometimes, the difference is not about execution time but memory usage;
    3. EndsWith and StartsWith perform better if you look for a char instead of a string. If you think about it, it totally makes sense!
    4. string.IsNullOrWhiteSpace performs more thorough checks than string.IsNullOrEmpty, but it comes at a performance cost; pick the method that matches the check you actually need;
    5. ToUpper and ToLower look similar; however, ToLower is quite a bit faster than ToUpper;
    6. Ordinal and Invariant comparisons return the same result for almost every input, but Ordinal is faster than Invariant;
    7. Newtonsoft performs comparably to System.Text.Json in execution time, but it allocates way more memory.
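
    As a quick illustration of point 3, the char overloads of StartsWith and EndsWith skip the substring machinery entirely (a tiny sketch with a made-up file name):

    ```cs
    using System;

    class CharOverloads
    {
        static void Main()
        {
            string fileName = "report.pdf";

            // String overload: a substring comparison (culture-sensitive by default).
            Console.WriteLine(fileName.EndsWith(".pdf")); // True

            // Char overload: a single character comparison, much cheaper.
            Console.WriteLine(fileName.EndsWith('f'));    // True
            Console.WriteLine(fileName.StartsWith('r'));  // True
        }
    }
    ```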

    This article first appeared on Code4IT 🐧

    My suggestion is always the same: take your time to explore the possibilities! Toy with your code, try to break it, benchmark it. You’ll find interesting takes!

    I hope you enjoyed this article! Let’s keep in touch on Twitter or LinkedIn! 🤜🤛

    Happy coding!

    🐧





    Source link

  • 7 Useful Tips to Consider When Starting a Trucking Business


    There are many business lines in the world that are not easy to manage, and trucking is one of them. Even so, it is a booming industry in many countries.

    Nowadays, many business owners are trying to enter this industry. Over the past years, it has shown constant growth, which has made it a popular business line. If you are planning to start a trucking business, you have to understand the complex jargon of the field. Along with that, you need to obtain DOT authority to operate in your state.

    In this blog, you will find out how you can start and run your trucking business successfully.

    Do your research

    Before you can hit the jackpot, you have to do the groundwork: research the market and its needs.

    In-depth research will help you identify your niche in the trucking industry. Are you interested in transporting goods, or in using a truck for mobile billboards? These are only two examples; once you start researching, you will find many more possibilities.

    After that, it will be easy for you to develop a business plan.

    Find your target market

    Another key business strategy is finding and understanding your target audience. Once you know whom you will serve and what their needs are, it becomes much easier to offer the right services and make more sales.

    It is wise to develop your business strategy around your niche market. This approach helps ensure that your operations are cohesive and on track. When you tailor your trucking services to the needs of your clients, your business will, as a result, earn both reputation and revenue.

    Finance your fleet

    Starting a business requires heavy investment, no matter the size or scale of your startup. When it comes to trucking, you may be surprised by the purchase cost of trucks. When planning your truck-buying budget, also prepare for maintenance costs. Fortunately, there are many financing options available to get you started.

    You can start your company with new vehicles, or consider investing in used commercial vehicles and construction machinery.

    Make it legal

    It is crucial for business owners to meet all the legal requirements to operate in their state. Without legal recognition or approval, federal authorities can take action against you, and you could end up losing your business.

    Many people enter the trucking business without knowing that it is highly regulated. You will need a permit or operating authority to run your business across state lines, and you will need to file for a DOT MC Number in your state.

    Ensure that your business complies with the applicable laws for maintaining legitimacy.

    Invest in technology

    Technology is the future, and trucking startups in particular should recognize its importance early. It increasingly shapes services and businesses of all kinds, and it can bring numerous benefits to yours.

    In a transport business, you will have to track and manage orders, so it is crucial to use mobile applications or websites to promote your business and make it visible. If you cannot afford big-ticket technology, you can still add basics like GPS systems, smart cameras, and more.

    Study your competition

    When you research your market, you should also study your competitors. This will help you understand the threats and weaknesses that existing businesses face, so you can come up with innovative business strategies and meet the needs of clients.

    You can also offer more competitive prices than other truckers and brokers while keeping reasonable margins, so a good number of clients will be attracted to your business.

    Pro tip:

    Whenever possible, connect directly with consignors so you can pass the savings on to your clients through lower prices.

    Final note:

    There is no doubt that the trucking business has been booming over the years, and it has brought gold to its owners. If you get the fundamentals right, you can harvest the jackpot even as a newcomer to the market.




  • Golden Tips To Improve Your Essay Writing Skills


    Writing an essay is one of the many tasks you’ll face as a student, so it’s essential to have good essay-writing skills. Strong writing skills can help you excel in your classes, standardized tests, and workplace. Fortunately, there are many ways you can improve your essay-writing skills. This article will provide golden tips to help you become a better essay writer.

    Seek Professional Writing Help

    Seeking professional writing help is one of the golden tips to improve your essay writing skills because it gives you access to experienced and knowledgeable writers who can help you craft a high-quality essay. Essay writing can be challenging and time-consuming for many students, particularly those who lack strong writing skills or the confidence to write a good essay.

    With professional writing help, you can get personalized feedback, guidance, and support to ensure your essay is of the highest quality. Professional writing help can also allow you to learn from the expertise of experienced writers, enabling you to improve your essay-writing skills. Students can look into platforms like www.vivaessays.com/essay-writer/ to get the needed assistance.

    Read Widely

    Another crucial tip for improving your essay writing skills is to read widely. Reading other people's work can give you better insight into what makes a good essay and can help you develop your own writing style. It can also expose you to new knowledge and ideas.

    Additionally, reading widely allows you to better understand grammar and sentence structure, which will help you construct your sentences. Finally, reading widely can help you develop your critical thinking skills and allow you to compare and contrast different ideas and viewpoints. All of these skills will be beneficial when writing your essays.

    Practice!

    They say that practice makes perfect, and this is certainly true when it comes to essay writing. You can improve your essays by consistently practicing and honing your writing skills. Practicing can help you become more comfortable with the structure of an essay and become familiar with the conventions of essay writing.

    Additionally, practicing can help you become more aware of which words and phrases work best in an essay, as well as help you become a more effective and clear communicator. Practicing can also help you become more confident in your writing and can help you identify any weak areas that need improvement. In short, practicing can help you hone your skills and make you a better essay writer.

    Have Someone Else Review Your Work

    Having a fresh pair of eyes review your work can help you identify areas for improvement in your essay writing. It can help you spot places where you use too many words or where your writing is confusing or unclear. It can also reveal where you keep making the same mistakes or repeating yourself. Furthermore, it can highlight weak points in your argument or places where you need to provide more evidence or detail.

    Finally, it can help you identify any grammar, spelling, or punctuation mistakes that you may have made. Ultimately, having someone review your work can help you become a better essay writer by highlighting areas you need to improve and providing constructive feedback.

    Have A Study Buddy

    Having a study buddy or group can help improve your essay-writing skills by providing a constructive environment for peer review. The group members can read each other’s work, offer feedback and criticism, and discuss ways to improve the essay. This can help identify common mistakes and improvement areas and provide insight on how to structure an essay for clarity and effectiveness. Additionally, studying with a group can keep you motivated and on task. It can give a sense of camaraderie and support when tackling a complex writing task.

    Work On Your Grammar, Spelling, And Punctuation Skills

    Lastly, improving your grammar, spelling, and punctuation skills is essential for improving your essay writing skills. Good grammar, spelling, and punctuation are the foundation of effective communication. If your writing is filled with errors, your message may be lost, and your essay will not make the grade.

    Furthermore, when you write an essay, it is essential to keep the conventions of grammar, spelling, and punctuation in mind. This will help ensure that your essay is clear and easy to read. Additionally, using grammar, spelling, and punctuation correctly will make your essay appear more professional and polished. Therefore, improving these skills is essential to improving your essay writing.

    Conclusion

    Essays are part of every student's life, so it's crucial to have good essay-writing skills. Fortunately, there are many tips and strategies to help you become a better essay writer. These include seeking professional writing help, reading widely, practicing, having someone else review your work, and having a study buddy or group. By following these golden tips, you can improve your essay-writing skills and become a better essay writer.


