Each process (application) in OS X or iOS is made up of one or more threads, each of which represents a single path of execution through the application's code. Every application starts with a single thread, which runs the application's
main
function. Applications can spawn additional threads, each of which executes the code of a specific function.When an application spawns a new thread, that thread becomes an independent entity inside of the application's process space. Each thread has its own execution stack and is scheduled for runtime separately by the kernel. A thread can communicate with other threads and other processes, perform I/O operations, and do anything else you might need it to do. Because they are inside the same process space, however, all threads in a single application share the same virtual memory space and have the same access rights as the process itself.
This chapter provides an overview of the thread technologies available in OS X and iOS along with examples of how to use those technologies in your applications.
Note: For a historical look at the threading architecture of Mac OS, and for additional background information on threads, see Technical Note TN2028, “Threading Architectures”.
Thread Costs
Threading has a real cost to your program (and the system) in terms of memory use and performance. Each thread requires the allocation of memory in both the kernel memory space and your program’s memory space. The core structures needed to manage your thread and coordinate its scheduling are stored in the kernel using wired memory. Your thread’s stack space and per-thread data is stored in your program’s memory space. Most of these structures are created and initialized when you first create the thread—a process that can be relatively expensive because of the required interactions with the kernel.
Table 2-1 quantifies the approximate costs associated with creating a new user-level thread in your application. Some of these costs are configurable, such as the amount of stack space allocated for secondary threads. The time cost for creating a thread is a rough approximation and should be used only for relative comparisons with each other. Thread creation times can vary greatly depending on processor load, the speed of the computer, and the amount of available system and program memory.
Item | Approximate cost | Notes |
---|---|---|
Kernel data structures | Approximately 1 KB | This memory is used to store the thread data structures and attributes, much of which is allocated as wired memory and therefore cannot be paged to disk. |
Stack space | 512 KB (secondary threads) 8 MB (OS X main thread) 1 MB (iOS main thread) | The minimum allowed stack size for secondary threads is 16 KB and the stack size must be a multiple of 4 KB. The space for this memory is set aside in your process space at thread creation time, but the actual pages associated with that memory are not created until they are needed. |
Creation time | Approximately 90 microseconds | This value reflects the time between the initial call to create the thread and the time at which the thread’s entry point routine began executing. The figures were determined by analyzing the mean and median values generated during thread creation on an Intel-based iMac with a 2 GHz Core Duo processor and 1 GB of RAM running OS X v10.5. |
Note: Because of their underlying kernel support, operation objects can often create threads more quickly. Rather than creating threads from scratch every time, they use pools of threads already residing in the kernel to save on allocation time. For more information about using operation objects, see Concurrency Programming Guide.
Another cost to consider when writing threaded code is the production costs. Designing a threaded application can sometimes require fundamental changes to the way you organize your application’s data structures. Making those changes might be necessary to avoid the use of synchronization, which can itself impose a tremendous performance penalty on poorly designed applications. Designing those data structures, and debugging problems in threaded code, can increase the time it takes to develop a threaded application. Avoiding those costs can create bigger problems at runtime, however, if your threads spend too much time waiting on locks or doing nothing.
Creating a Thread
Creating low-level threads is relatively simple. In all cases, you must have a function or method to act as your thread’s main entry point and you must use one of the available thread routines to start your thread. The following sections show the basic creation process for the more commonly used thread technologies. Threads created using these techniques inherit a default set of attributes, determined by the technology you use. For information on how to configure your threads, see Configuring Thread Attributes.
Using NSThread
There are two ways to create a thread using the `NSThread` class:
- Use the `detachNewThreadSelector:toTarget:withObject:` class method to spawn the new thread.
- Create a new `NSThread` object and call its `start` method. (Supported only in iOS and OS X v10.5 and later.)
Both techniques create a detached thread in your application. A detached thread means that the thread’s resources are automatically reclaimed by the system when the thread exits. It also means that your code does not have to join explicitly with the thread later.
Because the `detachNewThreadSelector:toTarget:withObject:` method is supported in all versions of OS X, it is often found in existing Cocoa applications that use threads. To detach a new thread, you simply provide the name of the method (specified as a selector) that you want to use as the thread’s entry point, the object that defines that method, and any data you want to pass to the thread at startup. The following example shows a basic invocation of this method that spawns a thread using a custom method of the current object.

Prior to OS X v10.5, you used the `NSThread` class primarily to spawn threads. Although you could get an `NSThread` object and access some thread attributes, you could only do so from the thread itself after it was running. In OS X v10.5, support was added for creating `NSThread` objects without immediately spawning the corresponding new thread. (This support is also available in iOS.) This support made it possible to get and set various thread attributes prior to starting the thread. It also made it possible to use that thread object to refer to the running thread later.

The simple way to initialize an `NSThread` object in OS X v10.5 and later is to use the `initWithTarget:selector:object:` method. This method takes the exact same information as the `detachNewThreadSelector:toTarget:withObject:` method and uses it to initialize a new `NSThread` instance. It does not start the thread, however. To start the thread, you call the thread object’s `start` method explicitly, as shown in the following example:

Note: An alternative to using the `initWithTarget:selector:object:` method is to subclass `NSThread` and override its `main` method. You would use the overridden version of this method to implement your thread’s main entry point. For more information, see the subclassing notes in NSThread Class Reference.

If you have an `NSThread` object whose thread is currently running, one way you can send messages to that thread is to use the `performSelector:onThread:withObject:waitUntilDone:` method of almost any object in your application. Support for performing selectors on threads (other than the main thread) was introduced in OS X v10.5 and is a convenient way to communicate between threads. (This support is also available in iOS.) The messages you send using this technique are executed directly by the other thread as part of its normal run-loop processing. (Of course, this does mean that the target thread has to be running in its run loop; see Run Loops.) You may still need some form of synchronization when you communicate this way, but it is simpler than setting up communications ports between the threads.

Note: Although good for occasional communication between threads, you should not use the `performSelector:onThread:withObject:waitUntilDone:` method for time-critical or frequent communication between threads. For a list of other thread communication options, see Setting the Detached State of a Thread.
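The invocations referred to above are not reproduced in this copy of the document. A minimal sketch of both creation techniques, assuming a hypothetical `myThreadMainMethod:` entry selector on the current object, might look like this:

```objc
// Technique 1: detach a new thread immediately (all OS X versions).
[NSThread detachNewThreadSelector:@selector(myThreadMainMethod:)
                         toTarget:self
                       withObject:nil];

// Technique 2 (iOS and OS X v10.5 and later): create the thread object
// first, optionally configure its attributes, then start it explicitly.
NSThread* myThread = [[NSThread alloc] initWithTarget:self
                                             selector:@selector(myThreadMainMethod:)
                                               object:nil];
[myThread start];
```

Between allocation and `start`, technique 2 lets you call configuration methods such as `setStackSize:` before the thread ever runs.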
Using POSIX Threads
OS X and iOS provide C-based support for creating threads using the POSIX thread API. This technology can actually be used in any type of application (including Cocoa and Cocoa Touch applications) and might be more convenient if you are writing your software for multiple platforms. The POSIX routine you use to create threads is called, appropriately enough, `pthread_create`.

Listing 2-1 shows two custom functions for creating a thread using POSIX calls. The `LaunchThread` function creates a new thread whose main routine is implemented in the `PosixThreadMainRoutine` function. Because POSIX creates threads as joinable by default, this example changes the thread’s attributes to create a detached thread. Marking the thread as detached gives the system a chance to reclaim the resources for that thread immediately when it exits.

Listing 2-1 Creating a thread in C
If you add the code from the preceding listing to one of your source files and call the `LaunchThread` function, it would create a new detached thread in your application. Of course, new threads created using this code would not do anything useful. The threads would launch and almost immediately exit. To make things more interesting, you would need to add code to the `PosixThreadMainRoutine` function to do some actual work. To ensure that a thread knows what work to do, you can pass it a pointer to some data at creation time. You pass this pointer as the last parameter of the `pthread_create` function.

To communicate information from your newly created thread back to your application’s main thread, you need to establish a communications path between the target threads. For C-based applications, there are several ways to communicate between threads, including the use of ports, conditions, or shared memory. For long-lived threads, you should almost always set up some sort of inter-thread communications mechanism to give your application’s main thread a way to check the status of the thread or shut it down cleanly when the application exits.
For more information about POSIX thread functions, see the `pthread` man page.

Using NSObject to Spawn a Thread
In iOS and OS X v10.5 and later, all objects have the ability to spawn a new thread and use it to execute one of their methods. The `performSelectorInBackground:withObject:` method creates a new detached thread and uses the specified method as the entry point for the new thread. For example, if you have some object (represented by the variable `myObj`) and that object has a method called `doSomething` that you want to run in a background thread, you could use the following code to do that: `[myObj performSelectorInBackground:@selector(doSomething) withObject:nil];`

The effect of calling this method is the same as if you called the `detachNewThreadSelector:toTarget:withObject:` method of `NSThread` with the current object, selector, and parameter object as parameters. The new thread is spawned immediately using the default configuration and begins running. Inside the selector, you must configure the thread just as you would any thread. For example, you would need to set up an autorelease pool (if you were not using garbage collection) and configure the thread’s run loop if you planned to use it. For information on how to configure new threads, see Configuring Thread Attributes.

Using POSIX Threads in a Cocoa Application
Although the `NSThread` class is the main interface for creating threads in Cocoa applications, you are free to use POSIX threads instead if doing so is more convenient for you. For example, you might use POSIX threads if you already have code that uses them and you do not want to rewrite it. If you do plan to use POSIX threads in a Cocoa application, you should still be aware of the interactions between Cocoa and threads and obey the guidelines in the following sections.

Protecting the Cocoa Frameworks
For multithreaded applications, Cocoa frameworks use locks and other forms of internal synchronization to ensure they behave correctly. To prevent these locks from degrading performance in the single-threaded case, however, Cocoa does not create them until the application spawns its first new thread using the `NSThread` class. If you spawn threads using only POSIX thread routines, Cocoa does not receive the notifications it needs to know that your application is now multithreaded. When that happens, operations involving the Cocoa frameworks may destabilize or crash your application.

To let Cocoa know that you intend to use multiple threads, all you have to do is spawn a single thread using the `NSThread` class and let that thread immediately exit. Your thread entry point need not do anything. Just the act of spawning a thread using `NSThread` is enough to ensure that the locks needed by the Cocoa frameworks are put in place.

If you are not sure if Cocoa thinks your application is multithreaded or not, you can use the `isMultiThreaded` method of `NSThread` to check.

Mixing POSIX and Cocoa Locks
It is safe to use a mixture of POSIX and Cocoa locks inside the same application. Cocoa lock and condition objects are essentially just wrappers for POSIX mutexes and conditions. For a given lock, however, you must always use the same interface to create and manipulate that lock. In other words, you cannot use a Cocoa `NSLock` object to manipulate a mutex you created using the `pthread_mutex_init` function, and vice versa.

Configuring Thread Attributes
After you create a thread, and sometimes before, you may want to configure different portions of the thread environment. The following sections describe some of the changes you can make and when you might make them.
Configuring the Stack Size of a Thread
For each new thread you create, the system allocates a specific amount of memory in your process space to act as the stack for that thread. The stack manages the stack frames and is also where any local variables for the thread are declared. The amount of memory allocated for threads is listed in Thread Costs.
If you want to change the stack size of a given thread, you must do so before you create the thread. All of the threading technologies provide some way of setting the stack size, although setting the stack size using `NSThread` is available only in iOS and OS X v10.5 and later. Table 2-2 lists the different options for each technology.

Technology | Option |
---|---|
Cocoa | In iOS and OS X v10.5 and later, allocate and initialize an NSThread object (do not use the detachNewThreadSelector:toTarget:withObject: method). Before calling the start method of the thread object, use the setStackSize: method to specify the new stack size. |
POSIX | Create a new pthread_attr_t structure and use the pthread_attr_setstacksize function to change the default stack size. Pass the attributes to the pthread_create function when creating your thread. |
Multiprocessing Services | Pass the appropriate stack size value to the MPCreateTask function when you create your thread. |
Configuring Thread-Local Storage
Each thread maintains a dictionary of key-value pairs that can be accessed from anywhere in the thread. You can use this dictionary to store information that you want to persist throughout the execution of your thread. For example, you could use it to store state information that you want to persist through multiple iterations of your thread’s run loop.
Cocoa and POSIX store the thread dictionary in different ways, so you cannot mix and match calls to the two technologies. As long as you stick with one technology inside your thread code, however, the end results should be similar. In Cocoa, you use the `threadDictionary` method of an `NSThread` object to retrieve an `NSMutableDictionary` object, to which you can add any keys required by your thread. In POSIX, you use the `pthread_setspecific` and `pthread_getspecific` functions to set and get the keys and values of your thread.

Setting the Detached State of a Thread
Most high-level thread technologies create detached threads by default. In most cases, detached threads are preferred because they allow the system to free up the thread’s data structures immediately upon completion of the thread. Detached threads also do not require explicit interactions with your program. The means of retrieving results from the thread is left to your discretion. By comparison, the system does not reclaim the resources for joinable threads until another thread explicitly joins with that thread, a process which may block the thread that performs the join.
You can think of joinable threads as akin to child threads. Although they still run as independent threads, a joinable thread must be joined by another thread before its resources can be reclaimed by the system. Joinable threads also provide an explicit way to pass data from an exiting thread to another thread. Just before it exits, a joinable thread can pass a data pointer or other return value to the `pthread_exit` function. Another thread can then claim this data by calling the `pthread_join` function.

Important: At application exit time, detached threads can be terminated immediately but joinable threads cannot. Each joinable thread must be joined before the process is allowed to exit. Joinable threads may therefore be preferable in cases where the thread is doing critical work that should not be interrupted, such as saving data to disk.
If you do want to create joinable threads, the only way to do so is using POSIX threads. POSIX creates threads as joinable by default. To mark a thread as detached or joinable, modify the thread attributes using the `pthread_attr_setdetachstate` function prior to creating the thread. After the thread begins, you can change a joinable thread to a detached thread by calling the `pthread_detach` function. For more information about these POSIX thread functions, see the `pthread` man page. For information on how to join with a thread, see the `pthread_join` man page.

Setting the Thread Priority
Any new thread you create has a default priority associated with it. The kernel’s scheduling algorithm takes thread priorities into account when determining which threads to run, with higher priority threads being more likely to run than threads with lower priorities. Higher priorities do not guarantee a specific amount of execution time for your thread, just that it is more likely to be chosen by the scheduler when compared to lower-priority threads.
Important: It is generally a good idea to leave the priorities of your threads at their default values. Increasing the priorities of some threads also increases the likelihood of starvation among lower-priority threads. If your application contains high-priority and low-priority threads that must interact with each other, the starvation of lower-priority threads may block other threads and create performance bottlenecks.
If you do want to modify thread priorities, both Cocoa and POSIX provide a way to do so. For Cocoa threads, you can use the `setThreadPriority:` class method of `NSThread` to set the priority of the currently running thread. For POSIX threads, you use the `pthread_setschedparam` function. For more information, see NSThread Class Reference or the `pthread_setschedparam` man page.

Writing Your Thread Entry Routine
For the most part, the structure of your thread’s entry point routines is the same in OS X as it is on other platforms. You initialize your data structures, do some work or optionally set up a run loop, and clean up when your thread’s code is done. Depending on your design, there may be some additional steps you need to take when writing your entry routine.
Creating an Autorelease Pool
Applications that link in Objective-C frameworks typically must create at least one autorelease pool in each of their threads. If an application uses the managed model—where the application handles the retaining and releasing of objects—the autorelease pool catches any objects that are autoreleased from that thread.
If an application uses garbage collection instead of the managed memory model, creation of an autorelease pool is not strictly necessary. The presence of an autorelease pool in a garbage-collected application is not harmful, and for the most part is simply ignored. It is allowed for cases where a code module must support both garbage collection and the managed memory model. In such a case, the autorelease pool must be present to support the managed memory model code and is simply ignored if the application is run with garbage collection enabled.
If your application uses the managed memory model, creating an autorelease pool should be the first thing you do in your thread entry routine. Similarly, destroying this autorelease pool should be the last thing you do in your thread. This pool ensures that autoreleased objects are caught, although it does not release them until the thread itself exits. Listing 2-2 shows the structure of a basic thread entry routine that uses an autorelease pool.
Listing 2-2 Defining your thread entry point routine
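The body of Listing 2-2 is missing from this copy. A minimal sketch of such an entry routine under the managed memory model (the method name is hypothetical) might look like:

```objc
- (void)myThreadMainRoutine
{
    // Create the top-level pool first thing in the thread.
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

    // Do thread work here, optionally setting up a run loop
    // and creating inner pools for long-running loops.

    // Release the pool last, catching any autoreleased objects.
    [pool release];
}
```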
Because the top-level autorelease pool does not release its objects until the thread exits, long-lived threads should create additional autorelease pools to free objects more frequently. For example, a thread that uses a run loop might create and release an autorelease pool each time through that run loop. Releasing objects more frequently prevents your application’s memory footprint from growing too large, which can lead to performance problems. As with any performance-related behavior though, you should measure the actual performance of your code and tune your use of autorelease pools appropriately.
For more information on memory management and autorelease pools, see Advanced Memory Management Programming Guide.
Setting Up an Exception Handler
If your application catches and handles exceptions, your thread code should be prepared to catch any exceptions that might occur. Although it is best to handle exceptions at the point where they might occur, failure to catch a thrown exception in a thread causes your application to exit. Installing a final try/catch in your thread entry routine allows you to catch any unknown exceptions and provide an appropriate response.
You can use either the C++ or Objective-C exception handling style when building your project in Xcode. For information about how to raise and catch exceptions in Objective-C, see Exception Programming Topics.
Setting Up a Run Loop
When writing code you want to run on a separate thread, you have two options. The first option is to write the code for a thread as one long task to be performed with little or no interruption, and have the thread exit when it finishes. The second option is to put your thread into a loop and have it process requests dynamically as they arrive. The first option requires no special setup for your code; you just start doing the work you want to do. The second option, however, involves setting up your thread’s run loop.
OS X and iOS provide built-in support for implementing run loops in every thread. The app frameworks start the run loop of your application’s main thread automatically. If you create any secondary threads, you must configure the run loop and start it manually.
For information on using and configuring run loops, see Run Loops.
Terminating a Thread
The recommended way to exit a thread is to let it exit its entry point routine normally. Although Cocoa, POSIX, and Multiprocessing Services offer routines for killing threads directly, the use of such routines is strongly discouraged. Killing a thread prevents that thread from cleaning up after itself. Memory allocated by the thread could potentially be leaked and any other resources currently in use by the thread might not be cleaned up properly, creating potential problems later.
If you anticipate the need to terminate a thread in the middle of an operation, you should design your threads from the outset to respond to a cancel or exit message. For long-running operations, this might mean stopping work periodically and checking to see if such a message arrived. If a message does come in asking the thread to exit, the thread would then have the opportunity to perform any needed cleanup and exit gracefully; otherwise, it could simply go back to work and process the next chunk of data.
One way to respond to cancel messages is to use a run loop input source to receive such messages. Listing 2-3 shows the structure of how this code might look in your thread’s main entry routine. (The example shows the main loop portion only and does not include the steps for setting up an autorelease pool or configuring the actual work to do.) The example installs a custom input source on the run loop that presumably can be messaged from another one of your threads; for information on setting up input sources, see Configuring Run Loop Sources. After performing a portion of the total amount of work, the thread runs the run loop briefly to see if a message arrived on the input source. If not, the run loop exits immediately and the loop continues with the next chunk of work. Because the handler does not have direct access to the `exitNow` local variable, the exit condition is communicated through a key-value pair in the thread dictionary.

Listing 2-3 Checking for an exit condition during a long job
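The body of Listing 2-3 is missing from this copy. A sketch of the structure described above, with the dictionary key name and method name chosen for illustration, might look like:

```objc
- (void)threadMainRoutine
{
    BOOL moreWorkToDo = YES;
    BOOL exitNow = NO;
    NSRunLoop* runLoop = [NSRunLoop currentRunLoop];

    // Publish the exit flag in the thread dictionary so the input
    // source handler (installed elsewhere) can flip it.
    NSMutableDictionary* threadDict = [[NSThread currentThread] threadDictionary];
    [threadDict setValue:[NSNumber numberWithBool:exitNow]
                  forKey:@"ThreadShouldExitNow"];
    // Install a custom input source on runLoop here (not shown).

    while (moreWorkToDo && !exitNow)
    {
        // Do one chunk of work; clear moreWorkToDo when finished.

        // Run the run loop briefly. If an input source fired and asked
        // the thread to exit, its handler updates the dictionary entry.
        [runLoop runUntilDate:[NSDate date]];
        exitNow = [[threadDict valueForKey:@"ThreadShouldExitNow"] boolValue];
    }
}
```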
Copyright © 2014 Apple Inc. All Rights Reserved. Terms of Use | Privacy Policy | Updated: 2014-07-15
From DD-WRT Wiki
Introduction
Use a build NO OLDER than r32170 before proceeding!
Quality of Service (QoS) is a method to guarantee a bandwidth relationship between individual applications or protocols. This is very handy when you max out your connection so that you can allow for each application to have some bandwidth and so that no single application can take down the internet connection. This allows, for example, a full speed download via FTP without causing jittering on a VOIP chat. The FTP will slow down slightly as bandwidth is needed for the VOIP, provided VOIP was given greater priority.
Please note: as of 336XX builds, if QoS is enabled, SFE (Shortcut Forwarding Engine) is disabled, even if it shows up as enabled in the GUI.
If you plan on using QoS, please read Priorities explained and Precedence before going any farther.
Priorities explained
- Maximum - This class offers maximum priority and should be used sparingly.
- Premium - Second highest bandwidth class; by default, handshaking and ICMP packets fall into this class. Most VoIP and video services will function well in this class if Express is insufficient.
- Express - The Express class is for interactive applications that require bandwidth above standard services so that interactive apps run smoothly.
- Standard - All services that are not specifically classed will fall under standard class.
- Bulk - The bulk class is only allocated remaining bandwidth when the remaining classes are idle. If the line is full of traffic from other classes, Bulk will only be allocated 1% of total set limit. Use this class for P2P and downloading services like FTP.
Bandwidth is allocated based on the following 'minimum to maximum' percentages of downlink and uplink values for each class as of current builds:
- Maximum: 75% - 100%
- Premium: 50% - 100%
- Express: 25% - 100%
- Standard: 15% - 100%
- Bulk: 5% - 100%
What this means is that if you have 10,000kbit of uplink traffic, 'Standard' class traffic can be reduced and de-prioritized to 15% or 1,500kbit when a concurrent express or higher priority service requires the down/uplink pipe at the same time.
Check which priorities are used with the command below:
Then scroll down to the Chain SVQOS_SVCS section.
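The command itself is missing from this copy of the page. On DD-WRT it is typically an iptables query of the mangle table, along these lines (assumption; run it from the router's shell or the Administration > Commands page):

```shell
iptables -t mangle -L -v
```

Then scroll down in the output to the Chain SVQOS_SVCS section to see which marks your rules are applying.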
TCP Packet Priority
Builds before r21061 will not have this option. Update your build if you don't have it, and stay up to date.
Prioritize small TCP packets with the following flags: ACK/SYN/FIN/RST
For detailed info on what these packets do see: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
It is highly recommended to have at least SYN, FIN & RST checked, or none at all. ACK can go both ways: P2P-intensive applications such as uTorrent involve a lot of ACKs, so theoretically prioritizing ACKs means you 'prioritized P2P', though that is not entirely accurate. Read up & do your own testing to find out what's best for your network. If you do not do large amounts of P2P activity on your network, or none at all, then enable ACK prioritization.
Precedence
With all these ways of marking traffic, it's easy to get confused about how seemingly contradictory requirements are resolved. For example, what happens if you have an IP rule setting IP 192.168.1.2 to priority 'maximum' and a MAC rule setting AA:BB:CC:DD:EE:FF to priority 'bulk'?
The order of precedence is as follows:
- (1st) MAC Priority
- (2nd) Netmask Priority
- (3rd) Interface Priority
- (4th) Services Priority
- (5th) Ethernet Port Priority
NOTE: Ethernet Port Priority only works on old 802.11g only models with ADMtek switch chips. If you don't have ethernet port priority listed, your router does not support it. Ethernet port priority is different than interface priority.
NOTE: Services can be used at the same time as netmask or MAC. For example, if you limit 192.168.1.2 to 6 Mbps down & 512 Kbps up while setting http to express, that device will have http packets prioritized within its allocated bandwidth limit. This only applies to builds r21061 & newer.
For netmask, the IP address entries are applied in the order that they appear in your netmask table, and only the first match applies. For example, if an entry marking 192.168.1.10/32 as bulk is evaluated before an entry marking 192.168.1.0/24 (all 192.168.1.x) as premium, the traffic from 192.168.1.10 would be marked bulk because it was the first match.
For services, the entries are applied in the order they appear in your services table, going from bottom to top. Again, only the first match applies.
Initial Setup
- Log into the Web Interface
- Select the NAT/QoS tab and then the QoS sub-tab.
- Click 'Enable'
- Set Port to 'WAN'. This works for all QoS setups except QoS by interface on a bridged interface under 'Interface Priority'; unbridged interfaces work fine with the WAN setting. To use QoS on a bridged interface, you must set the port to 'LAN & WLAN', which also works for all other QoS setups but uses slightly more CPU.
- Select HTB as your packet scheduler if 'queuing discipline' is listed below it; if not, use HFSC.
- Select FQ_CODEL as your queuing discipline.
- Set your download and upload speeds. You can use a speed test like Speedtest.net or dslreports.com/speedtest to check your actual connection speed; some ISPs also provide their own bandwidth testing service, which may be more reliable than the links above. Enter no higher than 95% of the values you measured into the proper fields. After everything is set, run the speed test again: if you get near 90% of your previous measurement in each direction, things are fine; if the results are way off, chances are you have reversed the two values. You must enter a value for the uplink field. You may enter 0 for the downlink field, in which case no QoS will occur in that direction, but setting the downlink field to 0 isn't recommended.
It probably bugs you to set less than 100% of your available bandwidth in these fields, but this is required. There will be a bottleneck somewhere in the system, and QoS can only work if the bottleneck is in your router, where it has control. The goal is to force the bottleneck into your router rather than some random location out on the wire over which you have no control. Some ISPs even have bursting ('powerboost'), which temporarily gives you extra bandwidth when you first start using your connection but later throttles down to a sustained rate. Fortunately, there is usually a minimum level that you receive on a consistent basis, and you must set your QoS limits below this minimum. The problem is finding this minimum, and you may have to repeat speed tests many times to determine it. For this reason, start with 80% of your measured speed and try things for a couple of days. If the performance is acceptable, you can start to inch your levels up. If you go even 2% higher than you should, your QoS will either stop working entirely (limits set too high) or stop working at random (when your ISP node/DSLAM is saturated). This can lead to a lot of confusion, so get QoS working first with conservative speeds and optimize later.
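The 95%/80% guidance above is simple arithmetic; a throwaway snippet (example values, not DD-WRT commands) to compute both limits from a measured speed:

```shell
# Hedged sketch: print the conservative starting limit (80%) and the
# eventual ceiling (95%) for each measured speed. Values in kbps are examples.
measured_down=10000
measured_up=1000
for v in "$measured_down" "$measured_up"; do
    echo "measured=${v}kbps start=$(( v * 80 / 100 ))kbps ceiling=$(( v * 95 / 100 ))kbps"
done
```

For a 10000/1000 kbps line this prints starting limits of 8000/800 kbps and ceilings of 9500/950 kbps, matching the inch-up procedure described above.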
Prioritizing by Application (Skype, Http) or Port Range (P2P)
- Choose an available Service or Port Range from the list or create one, and then press 'Add' next to it.
- For P2P applications, due to evolving protocols, encryption and obfuscation, it can be much better to define a port range (such as TCP/UDP 60000-61000) and set your P2P applications to operate within that range. This can significantly reduce the load on the router, avoid misidentifying packets, and shape your network traffic more efficiently.
- Add all your other selected Services and Port Ranges here
- Choosing a Layer7 service-based entry can work better than choosing a port range, though the router works harder since it has to dig into packets beyond the header to inspect the data they contain.
If you wish to add more than one priority then use the 'Add' button to create more entries.
Prioritizing by Interface
Select your preferred interface, click 'Add', then select the speed limit or priority you want. You can also limit ethernet ports this way (ethX or vlanX). Any limits or priorities set are shared across the whole interface, regardless of how many clients are connected to it. This is excellent for running a guest network/hotspot on e.g. ath1.1: applying QoS to the entire interface makes it impossible for a greedy user to bypass it by MAC cloning, changing IPs, etc., short of connecting to a different interface. The same interface can also be entered multiple times with different speed limits or priorities for different services. For example, 'ath0 512/512 with ssl' plus 'ath0 0/1024 with http' means SSL traffic on ath0 is limited to 512 Kbps down & up, HTTP is unlimited down (up to the global limit) and limited to 1024 Kbps (1 Mbps) up, and the remaining services are not limited (up to the global limits in both directions).
Prioritizing by Netmask (IP address)
These are entered in CIDR notation, including the network prefix.
For example, to specify a single IP address, enter xxx.xxx.xxx.xxx/32. Be careful to enter the netmask as /32: leaving it at /0 matches ALL IPs!
The netmask is the number of bits of the IP address to match. For example, the entry 192.168.1.0/24 matches 192.168.1.x addresses. An entry of 192.168.0.0/16 matches 192.168.x.x addresses. If you're unsure of how to create CIDR subnet masks and what they mean, then use a subnet calculator.
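To sanity-check a CIDR entry before adding it, the prefix match can be reproduced with plain shell arithmetic. This is an illustrative sketch, not anything DD-WRT itself runs:

```shell
# Convert a dotted-quad IP to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS
    IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_cidr IP NET/BITS -> succeeds if IP falls inside the block.
in_cidr() {
    ip=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    # /0 keeps no bits, so it matches everything -- which is exactly
    # why leaving the prefix at /0 in the GUI is dangerous.
    mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 192.168.1.10 192.168.1.0/24 && echo "192.168.1.10 matches 192.168.1.0/24"
in_cidr 192.168.2.10 192.168.1.0/24 || echo "192.168.2.10 does not match"
```

A subnet calculator gives the same answer, but this shows what the /bits prefix actually does: keep the first N bits and compare.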
After you have filled it out, press 'Add' next to it. If you want to add multiple entries (make sure the order is correct!), click 'Save' before entering another so previous changes don't get deleted; only click 'Apply' when you want to start testing the changes currently displayed.
Prioritizing by MAC Address
If you want to prioritize traffic from a particular device on your LAN that does not have a static IP address, you can prioritize by MAC address. Enter the MAC address of the device and press 'Add' next to it.
How Do You Check What QoS Priorities Were Applied
The DD-WRT web UI doesn't display live traffic. Short of doing a practical test, you can get your hands dirty by checking the conntrack entries via telnet or SSH access to the router. When you're logged in, run:
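A plausible way to inspect the QoS chains (the exact command form is an assumption; DD-WRT builds vary) is to list the mangle table:

```shell
# List the mangle table with numeric addresses and packet/byte counters;
# the SVQOS_SVCS chain holds the service-matching rules.
iptables -t mangle -L -v -n
```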
Then scroll down to the Chain SVQOS_SVCS section.
With the above iptables mangle command you can see the inbound/outbound chains, the entered IPs/MACs/services, and what is being matched where.
It will list all currently open connections and the protocols currently being routed by the router. This is what it would look like:
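The connection table can be read directly from procfs; the commented entry below is an invented illustration of the typical field layout, not captured output:

```shell
# Dump the connection-tracking table (newer kernels use /proc/net/nf_conntrack).
cat /proc/net/ip_conntrack
# Invented example of one entry (field layout only; values are made up):
# tcp 6 431999 ESTABLISHED src=192.168.1.2 dst=203.0.113.5 sport=51324 dport=80
#   src=203.0.113.5 dst=192.168.1.2 sport=80 dport=51324 [ASSURED] l7proto=http mark=10 use=1
```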
The parts to look at are the first set of source and destination IPs (including the port numbers), the presence of l7proto, and the 'mark' field. The 'mark' field indicates the QoS priority currently applied to each live connection. The 'mark' values correspond to the following:
- Maximum: 100
- Premium: 10
- Express: 20
- Standard: 30
- Bulk: 40
- (no QoS matched): 0
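To see at a glance which marks are in use, the conntrack table can be summarized with a small pipeline. A hedged helper (the conntrack path varies by kernel; pass a file argument to override the default):

```shell
# Count tracked connections per QoS mark, most common first.
count_marks() {
    grep -o 'mark=[0-9]*' "${1:-/proc/net/ip_conntrack}" | sort | uniq -c | sort -rn
}
```

Run `count_marks` on the router itself; a large count next to mark=0 tells you how many connections matched no QoS rule at all.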
You may see 'mark=0' for some l7proto services even though they are configured in the list of QoS rules. This may mean the layer 7 pattern-matching system didn't match a new or changed header for that protocol. Custom service entries matching on ports will usually take care of these.
Time Based QoS
As described in this thread, you can use cron jobs to enable/disable QoS. This is a simplistic approach, but more complex things could be done if you put your mind to it. These commands enable HTB QoS on the WAN port from 5 PM to 1 AM, but you still need to configure everything else in the GUI. If you want to use LAN & WLAN, change '`get_wanface`' to 'imq1'. To change the times, see the CRON page for information.
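On builds newer than r21061, the same effect can be sketched with DD-WRT's predefined service handler; the /tmp/cron.d path and job format are assumptions to verify against your own build:

```shell
# Hedged sketch: start QoS at 17:00 and stop it at 01:00 via a custom cron file.
# startservice/stopservice wshaper is the predefined QoS service handler.
cat > /tmp/cron.d/qos_schedule << 'EOF'
0 17 * * * root startservice wshaper
0 1 * * * root stopservice wshaper
EOF
```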
If you use HFSC then you would do something like this instead.
As described in this thread, you can also set different rates at different times by doing something like this, which changes the HTB rates.
Edit: This will cause trouble on firmware releases newer than r21061. Use the predefined service handler to stop/start QoS instead. Even if needed, use imq1 instead of br0 for internal traffic shaping.
If you need to alter the down/up rates, edit the nvram variables before restarting wshaper.
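A hedged sketch of such a rate change; the wshaper_downlink/wshaper_uplink variable names are assumptions, so confirm them with `nvram show | grep wshaper` before relying on them:

```shell
# Set new limits (kbps, example values), then restart the QoS service to apply.
nvram set wshaper_downlink=8000
nvram set wshaper_uplink=1000
stopservice wshaper
startservice wshaper
```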
Retrieved from 'http://wiki.dd-wrt.com/wiki/index.php/Quality_of_Service'