C regex for validating a URL
All of the URLs to be tested are placed in an external text file (data.txt) in the same directory as the validating script.
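The loading step can be sketched as follows. This is a minimal Python illustration (the original discusses PHP, but the idea carries over); it writes a small sample data.txt first so the snippet is self-contained, whereas the real script would only read an existing file.

```python
# For illustration, write a small data.txt, then read it back the way
# the evaluation script would: one URL per line, blank lines skipped.
sample = "https://example.com\nftp://ftp.example.org\n\nnot a url\n"
with open("data.txt", "w") as f:
    f.write(sample)

with open("data.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

print(urls)  # → ['https://example.com', 'ftp://ftp.example.org', 'not a url']
```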
One way to further improve the function is to convert all URLs to lower case before passing them to the validating function. This function has a low %overjudgement, but the problem is its high %slippage. It is fair to say that there is no perfect validating function with both zero %slippage and zero %overjudgement. Based on the evaluation results, you can select validating function #1, since its risk of %slippage and %overjudgement is minimal. There may well be functions not featured here that are more accurate, but so far this function ranks well in Google search. I should mention that 99.9% of domains fall into the standard form (handle.domain or …). As such, you are far more likely to let a user enter a bad URL they did not intend (because it validates) than to prevent an uncommon domain from actually being used.
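The lowercasing suggestion needs one caveat: only the scheme and host of a URL are case-insensitive, while the path, query, and fragment are not. A minimal Python sketch of that normalization (illustrative, not part of the original script):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_case(url: str) -> str:
    """Lowercase only the scheme and host before validation; the path,
    query, and fragment of a URL are case-sensitive and must be kept."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(scheme=parts.scheme.lower(),
                                     netloc=parts.netloc.lower()))

print(normalize_case("HTTP://Example.COM/Path?Q=MixedCase"))
# → http://example.com/Path?Q=MixedCase
```

Lowercasing the whole string would also work for bare domains, but would corrupt URLs whose paths or query parameters are case-sensitive.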
As such, a much simpler regex would likely "make more people happy" than one that is 100% correct to the technical spec.
I am using the URL validation built into Laravel and PHP (filter_var with FILTER_VALIDATE_URL), but it does not work the way I expected.
URLs such as the following pass validation: hp://, htt://, htts://. So... I am building a URL shortener for learning purposes, and this issue came up.
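One way around overly permissive built-in validators is to add a stricter check of your own. A hedged Python sketch (the name `looks_like_url` and the exact pattern are illustrative assumptions, not from the original): it only accepts an explicit http/https scheme followed by a dotted host, so bare or misspelled schemes like hp:// are rejected.

```python
import re

# Require an explicit http/https scheme and a plausible dotted host;
# this deliberately rejects bare or misspelled schemes like "hp://".
STRICT_URL = re.compile(
    r'^https?://'                        # scheme: http or https only
    r'[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+'   # dotted host, e.g. example.com
    r'(:\d+)?'                           # optional port
    r'(/\S*)?$'                          # optional path/query
)

def looks_like_url(url: str) -> bool:
    return STRICT_URL.match(url) is not None

for u in ["hp://", "htt://", "htts://", "https://example.com/x"]:
    print(u, looks_like_url(u))
```

Note that, per the discussion above, a strict pattern like this trades %slippage for %overjudgement: it will reject legitimate but uncommon forms such as bare intranet hostnames.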
Validating URLs is important for form handling and PHP data processing. The %slippage can then be computed: %slippage = 3/27 = 11.11% (the lower the percentage, the better). The URLs numbered 1 to 40 above are acceptable URLs.
Currently, there are numerous solutions for validating URLs. This validating function over-judged 14 URLs (instead of marking them as valid, it marked them invalid).
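The two evaluation metrics used above can be computed mechanically from a labeled test set. A minimal Python sketch (the helper names and the tiny sample data are illustrative assumptions; the real evaluation would read the labeled URLs from data.txt as described earlier):

```python
def evaluate(validator, labeled_urls):
    """labeled_urls: list of (url, is_actually_valid) pairs.
    %slippage      = share of invalid URLs the validator wrongly accepts.
    %overjudgement = share of valid URLs the validator wrongly rejects."""
    invalid = [u for u, ok in labeled_urls if not ok]
    valid   = [u for u, ok in labeled_urls if ok]
    slippage      = sum(validator(u) for u in invalid) / len(invalid) * 100
    overjudgement = sum(not validator(u) for u in valid) / len(valid) * 100
    return slippage, overjudgement

# A deliberately naive validator, to show the trade-off.
def naive(url):
    return url.startswith(("http://", "https://"))

data = [
    ("https://example.com", True),   # valid, accepted
    ("example.com", True),           # valid, but naive() rejects it
    ("http://", False),              # invalid, but naive() accepts it
    ("htp://x", False),              # invalid, rejected
]
print(evaluate(naive, data))  # → (50.0, 50.0)
```

Lower is better for both numbers, and as the text notes, driving one to zero generally pushes the other up.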
If you're going to allow dotted IPs, you should really allow 32-bit integer IPs too, e.g., … and ….
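For context on that point: a dotted IPv4 host and its single 32-bit decimal form refer to the same address, and many clients accept either in a URL. A small Python illustration of the equivalence (the example address is mine, not from the original):

```python
import ipaddress

# 192.168.0.1 and its 32-bit decimal form name the same host:
# 192*2**24 + 168*2**16 + 0*2**8 + 1 = 3232235521
addr = ipaddress.IPv4Address("192.168.0.1")
print(int(addr))                           # → 3232235521
print(ipaddress.IPv4Address(3232235521))   # → 192.168.0.1
```

A validator that accepts http://192.168.0.1/ but rejects the integer form is therefore being inconsistent, which is the point the answer is making.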